2016.06.14 21:43 See_Sharpies Today I Learned For Programmers
2024.05.13 20:36 InvokeMeWell Question python library similar to simulink
2024.05.13 03:02 frogmirth Fix oversharpened RAW photos from an iPhone using Photoshop
2024.05.12 04:16 Pop_that_belly Why does my bandpass filter not completely block the unwanted frequencies in FFT and pwelch?
I want to create pwelch and FFT plots of measured signals, each for the original signal and for a bandpassed version containing only the frequencies between 20 and 20,000 Hz. Basically, I used the FFT and pwelch commands once on a vector containing the original measurements and once on the vector after applying the MATLAB-internal bandpass filter. The result is that there is always a little bit of signal left between 0 and 20 Hz, particularly visible in the dB view of the PSD. My code and the plots are below. My problems: [screenshot]
Here are the results. The FFT looks normal to me; the big spike from the sensor offset disappears after bandpassing the signal, but there is some amplitude left below 20 Hz (not as easy to see as in the PSD, admittedly). [FFT plots of the original and bandpassed signal] As for the PSD, I am not experienced with this, so I am guessing it looks normal? Again, the sensor-offset spike is removed, but again the bandpass leaves some noise below 20 Hz. [PSD plots of the original and bandpassed signal]
2024.05.10 18:34 1over3 Learning resources for actively stabilized rockets
2024.05.10 18:31 Ok_Establishment1880 me trying to figure out socksfor1's joke about "tennesse"
submitted by Ok_Establishment1880 to Socksfor1Submissions
2024.05.09 18:17 MadeForThisDogPost YACPQ: Feedback/advice for courses in Machine Learning and Computing Systems. New admit Fall 24
2024.05.09 10:28 nuwonuwo all hail Gaussian blur filter. Background not mine (credited in original post)
submitted by nuwonuwo to Ibispaintx
2024.05.08 15:42 meticulouslyhopeless How to recreate Mai Yoneyama's post processing effects?
I've recently come across this music video animated by Mai Yoneyama and was immediately enamored by how beautiful it is. I was wondering if anyone knows how to replicate the post-processing effects here in After Effects or perhaps CSP? https://youtu.be/yYAgBRO-aT8 submitted by meticulouslyhopeless to animation [Screenshots from the video]
As far as I am aware, Mai Yoneyama only has one livestream of her animating, and it does not include post-processing effects; however, it does showcase her using CSP for the roughs and outlines. I have no idea if she has posted further information about this on any of her social media; I have tried googling and searching around and haven't found anything. This includes checking on sakugabooru; none of the posts including genga showed the post-processing before and after. [Further screenshots]
But yeah, just wondering if anyone could lend any information about this. I REALLY love this style of animation! (If there are ANY sources out there to replicate something similar to this, I will take it, even if not related to Mai Yoneyama specifically.) It's so painterly... I think the most I can make out is possibly the use of an RGB filter, Gaussian blurs in several areas, and colored outlines helping the feel of the scene.
The RGB effect is confusing to me, however: it seems like some areas are affected by an RGB filter and others are not? I'm not really sure how that works here... There is the possibility it is not even RGB in the first place, maybe an overlay filter. Would love to hear others' thoughts!
5/8/2024 EDIT: OK, I have been looking more into it! As it turns out, some of the blur effects aren't just Gaussian blur, but rather what is called "bokeh blur"! It has a more out-of-focus-camera-esque effect. I've been looking at how other anime do their post-processing. One creator I have looked at is Makoto Shinkai, and there are a couple of screenshots of him using the After Effects program. [Screenshots of Makoto Shinkai's After Effects compositing]
This Reddit post also seems helpful in providing information about compositing in anime: https://www.reddit.com/anime/comments/pio41i/any_idea_where_i_can_find_info_about_the/ I suppose if I actually messed around with After Effects I'd be able to figure this stuff out a lot easier...
2024.05.07 23:48 Reasonable-Neck-6800 Applied to over 3k+ jobs and still no positive response/interviews. What am I doing wrong? Any help would be highly appreciated🙏
submitted by Reasonable-Neck-6800 to resumes
2024.05.04 09:32 2starofthesea1 Digital Print on Silk (by the yard) for textiles - File prep questions
Hi. I need a bit of assistance understanding how to prepare some files for digital print on silk by the yard. submitted by 2starofthesea1 to photoshop
Context: I have scanned several watercolor paintings that need to be edited in Photoshop and printed on garments. I created a document at the actual size of each pattern piece (the biggest one has a height of 45"). It is not a repeat pattern. I scanned every painting at 1200 dpi and embedded each image in my file. Each scan is on a different layer, and has either a layer mask or smart filters applied (mainly motion and Gaussian blur). I used "Multiply" as the blending mode for most of my layers. The color profile is CMYK U.S. Web Coated (SWOP) v2, 300 DPI.
Questions: [images of the scanned paintings]
2024.05.02 13:59 Less_Bandicoot_9213 Getting MATLAB Assignment Help. Is It a Good Idea?
2024.05.02 03:09 Gleeful_Gecko Preparing for career in control
2024.05.01 21:00 SuperbSpider How to edit ROI?
submitted by SuperbSpider to ImageJ
Apologies if this seems too basic a question. I am an ImageJ beginner and I am still figuring out how to use it. For this image, I applied a Gaussian filter, then used thresholding to create a binary image, used that image to create a selection, saved it as an ROI, and applied it to my original image. Now I would like to edit the ROI to remove sections I am not interested in analyzing. For example, on this image there is a chunk of tissue with irregular borders (upper right side) that I want to remove from my ROI. How do I do that? [Screenshot of the image with the ROI]
2024.05.01 19:07 EARTHB-24 The ALMA
1. Unique Formula: ALMA uses a proprietary formula that incorporates Gaussian distributions to calculate the moving average. This formula assigns different weights to recent price data points based on their distance from the current price. As a result, ALMA gives more weight to recent price movements while still considering historical data.
2. Adaptive Nature: ALMA is adaptive, meaning it adjusts dynamically to changes in market conditions. It can adapt its smoothing parameters based on market volatility, allowing it to respond more quickly to price changes during periods of high volatility and provide smoother signals during stable market conditions.
3. Reduced Lag: Compared to traditional moving averages, ALMA aims to reduce lag by providing more timely signals of trend changes. Its adaptive nature and unique formula help to filter out noise and provide a smoother representation of price trends.
4. Customizable Parameters: ALMA allows traders to customize its parameters, such as the look-back period and the smoothing factor, to suit their trading preferences and the characteristics of the financial instrument being analyzed. Adjusting these parameters can fine-tune the sensitivity and responsiveness of the indicator.
5. Versatility: ALMA can be used in various trading strategies, including trend following, trend reversal, and momentum trading. Traders often use ALMA in conjunction with other technical indicators, chart patterns, and trading signals to confirm trends and identify entry and exit points.
6. Interpretation: In practice, traders typically interpret ALMA signals similarly to other moving averages.
Bullish signals occur when the price crosses above the ALMA line, indicating a potential uptrend, while bearish signals occur when the price crosses below the ALMA line, suggesting a possible downtrend. Overall, the Arnaud Legoux Moving Average (ALMA) is a versatile technical indicator that aims to provide smoother and more responsive trend signals than traditional moving averages. Its adaptive nature and unique formula make it a valuable tool for traders and analysts seeking to identify trends and make informed trading decisions in financial markets.
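The Gaussian weighting described above can be sketched in a few lines. This is an illustrative Python implementation using the commonly cited ALMA parameterization (window, offset, sigma); the defaults here are conventional values, not a definitive reference implementation:

```python
import numpy as np

def alma(prices, window=9, offset=0.85, sigma=6.0):
    """Arnaud Legoux Moving Average: a Gaussian-weighted moving average
    whose weight peak is shifted toward recent prices by `offset`."""
    m = offset * (window - 1)        # centre of the Gaussian window
    s = window / sigma               # width of the Gaussian window
    i = np.arange(window)
    w = np.exp(-((i - m) ** 2) / (2 * s ** 2))
    w /= w.sum()                     # normalize weights to sum to 1
    out = np.full(len(prices), np.nan)
    for t in range(window - 1, len(prices)):
        out[t] = np.dot(w, prices[t - window + 1 : t + 1])
    return out

prices = np.array([10.0] * 20)
print(alma(prices)[-1])  # a constant series stays constant
```

A higher `offset` shifts the Gaussian peak toward the newest bar (less lag); a larger `sigma` widens the window (more smoothing), which matches the lag/smoothness trade-off described in the post.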
2024.04.29 09:48 mangomanga201 Trying to understand the FIR filters on MATLAB
2024.04.26 22:37 bboys1234 [0 YoE] Hardly getting interviews, what can I do to improve my resume?
Hello! I read the wiki and have updated my resume as best as I can. What can I improve upon? submitted by bboys1234 to EngineeringResumes
Context: I am graduating in a few weeks from an ABET-accredited BSME program, and have been applying to jobs (about 50 so far) over the past few months. So far I have had one interview with the big shiny rocket company in Texas, and made it to the final round but unfortunately didn't get the offer. That has been my only interview. I am looking to do design or R&D, but am open to anything that lets me get hands-on and solve problems. Ideally, I'd like to be in the northeast. I've been a follower of this sub for a while, and think my resume is decent, but want to know if anything stands out or could be changed for the better. Thank you! [Resume image]
2024.04.24 16:22 JuicyLegend OpENF - Update on Phase 1 & 2
Hello Everyone, submitted by JuicyLegend to TheMysteriousSong
It has been long overdue that I made an update on my progress, but there has just been so much going on. I once more want to thank you all for being so supportive and helpful. I have been busy trying to build a database from seismic data, which succeeded to some extent, but not as much as I hoped. I did learn some interesting things thanks to u/omepiet's work in aligning the songs exactly. So now for the updates:
Phase 1 - Update
I managed to plot all of the ENF spectra into a single plot, and they now line up perfectly! I also took the liberty of changing the pitch and speed again for all of the songs by u/omepiet, because I really think the speed and pitch needed to be corrected. Instead of assuming that the 10 kHz line is completely exact, I assumed that the 15.625 kHz line was exact. I assume this because when the recording of Compilation A was made, the CRT TV source must have been really close by, and the way CRT technology works requires that frequency to be really exact. Of course there is the possibility that the TV was broken, but it is quite unlikely for it to be broken and on at the same time. Somebody was probably just using it at the time, or it might have been a computer screen, I don't know. In any case, here are the plots: [TMS plots for the ENF range around 50 Hz]
So as you can see on the plot, there are 6 lines in the legend and only 2 are visible. That is because the first 3 line up perfectly on the yellow line and the last 3 line up perfectly on the blue line. That means that in all cases we are dealing with the exact same signal, and thus the same recording! Note that the blue line is where I plot the versions of TMS that I adjusted in pitch and speed to match the CRT line at 15.625 kHz. That brings the 10 kHz line a little bit higher, to about 10.150 kHz. Or I just messed something up that can align both lines in the way they should.
I personally think the song sounds much better with this adjustment, and you can listen to the results for yourself here: TMS Adjusted
I played a lot with filters this past week, especially Butterworth filters; that is also what I used to create these plots. While playing with the filters, I made the band-pass really narrow around 50 Hz with a 1 kHz resampling rate and discovered something interesting: there is a very clear triangle/square wave present in that band. For different orders of the Butterworth filter, I made new wave files (which are twice as long now, I guess because of either the resampling or a bug somewhere). You can find the (Audacity) files here: Filtered Waveforms
I also made power spectra for different orders of the Butterworth filter: [Power spectra of Butterworth-filtered TMS-new 32-bit PCM for n=1, n=2, and n=3] (For those whom it may concern: yes, the leakage in the 2nd and 3rd plots is higher than in the 1st, because the harmonics weren't showing up due to low amplitude. They clearly showed up in Audacity once I increased the amplitude. 1st: 0.91, 2nd: 0.2, 3rd: 0.2.)
So as you can see from the spectra, there are clear peaks at 50, 150, 250, 350, 450, ... Hz. This looks to me like either a carrier signal from the FM broadcast or, much more likely, a signal from the FM synthesizer, aka the Yamaha DX7. This already goes far beyond my knowledge of these things, but if any of you are able to reproduce such a signal with the synthesizer, I could use it to subtract it from the waveform, since it is convoluted with the power waveform. You can see evidence of the power-grid waveform in the very small peaks at 100, 200 and 300 Hz in the 2nd and 3rd plots, which are harmonics of the grid frequency.
One general thing I have noticed while trying to isolate the grid frequency is that the signal around 50 Hz tends to always be above 50 Hz. This probably means that TMS was recorded during a time of high energy supply on the grid, which generally tends to be in the evening or at night. I provided a picture for demonstration purposes: [An overview of what happens at different grid frequencies]
I feel that once the removal of the triangle/sawtooth waveform is completed, we stand a much better chance of recovering the true ENF signal. I look forward to your opinions about it :)
Phase 2 - Update
I've been working hard trying to find a suitable source from which to create a reference database for the ENF signal. So, as I said in my previous post, I started exploring seismic databases. The most interesting one in my opinion can be found here: EPOS Database
It takes a little practice to navigate it, but I mainly just searched for data from 1983 to 1985. And boy did I get a lot of data; it took my PC more than a day to make all of the plots from 40 GB of seismic data, lmao. Very unfortunately, though, the most interesting dates, i.e. the 4th and 28th of September and the 28th of November, don't have much or any information :'( The sampling frequency is also rather low, unfortunately. While I was pretty hopeful at first when I found out that the seismic data had been sampled at 100 Hz, the Nyquist frequency of 50 Hz sits exactly at the grid frequency, so this is just short of being able to capture it well, unless someone knows a few tricks perhaps. In any case, there are still some options left worth exploring.
Waveform Data
Just for fun, here is a picture of what one of the shorter plots looks like: [Waveform data for CH-ACB-SHE at 100 Hz sampling frequency]
In the Excel sheet you can find all of the available data for a certain time period for certain networks, stations and channels. I found (at least from this run) that only ETH (the Swiss seismic research network) contains worthwhile data. The quest goes on, and I still feel very optimistic about finding the time and date of TMS with probabilistic certainty. Thank you all again for all of your efforts! I wish you all a good week and I look forward to your reactions once more! 🥔
P.s. It might take some time (a few weeks) before my next post, as I have some personal business to attend to. Nonetheless, I will keep a close eye on any comments and will be available from time to time in the Discord server.
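The narrow Butterworth band-pass plus frequency-tracking step described in the Phase 1 update can be sketched as follows. This is an illustrative SciPy version on a synthetic signal, with assumed band edges and filter order, not the author's actual pipeline:

```python
import numpy as np
from scipy import signal

fs = 1000                               # 1 kHz resampling rate, as in the post
t = np.arange(0, 10, 1 / fs)
# synthetic "recording": mains hum slightly above 50 Hz buried in noise
rng = np.random.default_rng(1)
x = 0.5 * np.sin(2 * np.pi * 50.05 * t) + rng.standard_normal(t.size)

# very narrow Butterworth band-pass around 50 Hz (order n as experimented with)
sos = signal.butter(3, [49.5, 50.5], btype="bandpass", fs=fs, output="sos")
hum = signal.sosfiltfilt(sos, x)

# instantaneous frequency via the analytic signal -> ENF estimate;
# values consistently above 50 Hz would match the post's observation
analytic = signal.hilbert(hum)
inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
enf = inst_freq[fs:-fs].mean()          # trim filter edge effects
print(f"estimated ENF: {enf:.3f} Hz")
```

Because the band is so narrow, almost all broadband noise is rejected and the instantaneous-frequency track recovers the hum's small deviation from nominal 50 Hz.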
2024.04.18 18:46 Cerricola how to automate state-space representation matrices?
2024.04.18 12:52 Cerricola Need help optimizing this function.
ofn <- function(th) {
  # OFN: performs the Kalman filter operation for a state-space model
  # and calculates the likelihood of the model. It iterates over all
  # observations, predicting and updating the factor (beta) and P,
  # calculates the likelihood at each step, and returns the negative sum.
  # Used for maximum likelihood estimation to find the parameter values.

  # Given th, we obtain matrices R, Q, H, F
  matrices_list <- matrices(th)
  R <- matrices_list$Rs
  Q <- matrices_list$Qs
  H <- matrices_list$Hs
  F <- matrices_list$Fs

  beta00 <- c(rep(0, 10))   # Matrix (10x1)
  P00 <- diag(10)           # Identity matrix (10x10)
  like <- numeric(captst)   # Vector (T-1 x 1)

  # KALMAN FILTER
  it <- 1  # Kalman filter iterations
  while (it <= captst) {  # While iteration number <= number of observations
    beta10 <- F %*% beta00             # Prediction equations
    P10 <- F %*% P00 %*% t(F) + Q      # beta is the hidden variable
    n10 <- yv[it, ] - H %*% beta10     # Error forecast
    F10 <- H %*% P10 %*% t(H) + R      # Forecast error variance
    # Likelihood function (Gaussian)
    like[it] <- -0.5 * (log(2 * pi * det(F10)) + (t(n10) %*% solve(F10) %*% n10))
    K <- P10 %*% t(H) %*% solve(F10)   # Kalman gain
    beta11 <- beta10 + K %*% n10       # Updating equations
    filter[it, ] <- t(beta11)
    P11 <- P10 - K %*% H %*% P10
    beta00 <- beta11
    P00 <- P11
    it <- it + 1                       # Iterating
  }
  fun <- -(sum(like))  # Negative sum of the likelihood
  return(fun)
}

But when I optimize it, it takes a lot of time to converge:
# Define the options for the optimization function
options <- list(maxit = 100)

# The 'optim' function in R is similar to 'fminunc' in MATLAB
result <- optim(par = startval, fn = function(x) ofn(x), method = "BFGS",
                control = options, hessian = TRUE)

I have tried to vectorize it, but the problem persists:
ofn_v <- function(th) {
  # OFN (vectorized attempt): performs the Kalman filter operation for a
  # state-space model, calculates the likelihood at each step, and
  # returns the negative sum, for maximum likelihood estimation.

  # Obtain matrices R, Q, H, F from th
  matrices_list <- matrices(th)
  R <- matrices_list$Rs
  Q <- matrices_list$Qs
  H <- matrices_list$Hs
  F <- matrices_list$Fs

  # Initial states
  beta00 <- c(rep(0, 10))   # Matrix (10x1)
  P00 <- diag(10)           # Identity matrix (10x10)
  like <- numeric(captst)   # Vector (T-1 x 1)

  # Define a function to perform operations for each time step
  kalman_step <- function(it) {
    beta10 <- F %*% beta00             # Prediction equations
    P10 <- F %*% P00 %*% t(F) + Q      # Prediction of P
    n10 <- yv[it, ] - H %*% beta10     # Error forecast
    F10 <- H %*% P10 %*% t(H) + R      # Forecast error variance
    # Likelihood function (Gaussian)
    like[it] <<- -0.5 * (log(2 * pi * det(F10)) + (t(n10) %*% solve(F10) %*% n10))
    K <- P10 %*% t(H) %*% solve(F10)   # Kalman gain
    beta11 <- beta10 + K %*% n10       # State update
    filter[it, ] <<- t(beta11)
    P11 <- P10 - K %*% H %*% P10       # Covariance update
    # Update states for next iteration
    list(beta11, P11)
  }

  # Run the Kalman filter over all time steps
  results <- lapply(seq_len(captst), kalman_step)

  # Extract the final states from the last iteration
  final_states <- results[[length(results)]]
  beta00 <- final_states[[1]]
  P00 <- final_states[[2]]

  # Return negative sum of likelihood
  -sum(like)
}

Here is the rest of the code:
matrices <- function(z) {
  # Assuming 'n' and 'vfq' are already defined
  # Procedure to obtain the Kalman matrices
  Rs <- matrix(0, n, n)  # Empty matrix (n x n)

  h2 <- rbind(           # Manually defined matrix
    c(1, 0, 0, 0, 0, 0, 0, 0),
    c(0, 0, 1, 0, 0, 0, 0, 0),
    c(0, 0, 0, 0, 1, 0, 0, 0),
    c(0, 0, 0, 0, 0, 0, 1, 0)
  )
  Hs <- cbind(z[1:n], matrix(0, n, 1), h2)  # Importing data from vector z into matrix Hs

  z0 <- z[(n+1):(n+2)]
  z1 <- z[(n+3):(n+4)]
  z2 <- z[(n+5):(n+6)]
  z3 <- z[(n+7):(n+8)]
  z4 <- z[(n+9):(n+10)]

  f1 <- c(t(z0), rep(0, 8))             # Manually creating the rows of matrix F
  f2 <- c(1, 0, rep(0, 8))
  f3 <- c(rep(0, 2), t(z1), rep(0, 6))
  f4 <- c(rep(0, 2), 1, rep(0, 7))
  f5 <- c(rep(0, 4), t(z2), rep(0, 4))
  f6 <- c(rep(0, 4), 1, rep(0, 5))
  f7 <- c(rep(0, 6), t(z3), rep(0, 2))
  f8 <- c(rep(0, 6), 1, rep(0, 3))
  f9 <- c(rep(0, 8), t(z4))
  f10 <- c(rep(0, 8), 1, 0)
  Fs <- rbind(f1, f2, f3, f4, f5, f6, f7, f8, f9, f10)  # Concatenating matrix Fs

  z2 <- kronecker(z[(n+11):(n+14)]^2, c(1, 0))  # Vector multiplication
  Qs <- diag(c(vfq, 0, z2))                     # Manually creating matrix Qs

  return(list(Rs = Rs, Qs = Qs, Hs = Hs, Fs = Fs))
}

The initial part of the code:
#### Stock Watson Kalman Filter ####
# R 4.3.3
# UTF-8
# 11/04/24

# Clear environment and console
rm(list = ls())
cat("\014")

# Imports
# libraries
library(readxl)
library(stats)
library(tidyverse)
# functions
source('matrices.R')
source('ofn.R')
source('kfilter.R')

# Data input
yv <- read_excel("rawdata2.xls", range = "S87:V854", col_names = FALSE)  # Importing the xlsx file
T <- nrow(yv)  # Number of observations
n <- ncol(yv)  # Number of variables

# Renaming the variable
yv <- scale(yv)  # Standardize to N(0,1)

vfq <- 1                          # Normalized variance
B <- c(0.9, 0.8, 0.7, 0.6)        # Vector (n x 1)
phif <- rep(0.3, 2)               # Vector 0.3 (2x1)
phiy <- rep(0.3, n*2)             # Vector 0.3 (2n x 1)
v <- apply(yv, 2, sd)             # Standard deviation of yv
startval <- c(B, phif, phiy, v)   # Vector concatenation
nth <- length(startval)           # Number of parameters to be estimated
captst <- T
filter <- matrix(0, nrow = captst, ncol = 10)  # Filter inferences

#### Maximizing the likelihood function ####
# Define the options for the optimization function
options <- list(maxit = 100)
# The 'optim' function in R is similar to 'fminunc' in MATLAB
result <- optim(par = startval, fn = function(x) ofn(x), method = "BFGS",
                control = options, hessian = TRUE)

Thank you in advance for your time :)
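For what it's worth, the Kalman recursion is inherently sequential (each step needs the previous beta and P), so lapply-style rewrites cannot remove the loop; the usual speedups come from keeping the loop body lean and replacing det() plus solve() with a single Cholesky factorization of the forecast-error variance. Here is a hedged NumPy sketch of the same negative log-likelihood with generic matrices, not the poster's Stock-Watson setup:

```python
import numpy as np

def kalman_neg_loglik(y, F, H, Q, R):
    """Negative Gaussian log-likelihood of a linear state-space model via
    the Kalman filter. Illustrative sketch; dimensions and matrices here
    are assumptions, not the original model."""
    k = F.shape[0]
    beta = np.zeros(k)          # initial state
    P = np.eye(k)               # initial state covariance
    ll = 0.0
    for t in range(y.shape[0]):
        # prediction equations
        beta_p = F @ beta
        P_p = F @ P @ F.T + Q
        # forecast error and its variance
        n = y[t] - H @ beta_p
        S = H @ P_p @ H.T + R
        # one Cholesky factor gives both log-det and the quadratic form,
        # cheaper and more stable than det() + solve()
        L = np.linalg.cholesky(S)
        z = np.linalg.solve(L, n)
        ll += -0.5 * (n.size * np.log(2 * np.pi)
                      + 2 * np.sum(np.log(np.diag(L)))
                      + z @ z)
        # updating equations
        K = P_p @ H.T @ np.linalg.inv(S)
        beta = beta_p + K @ n
        P = P_p - K @ H @ P_p
    return -ll

# tiny smoke test: 1-state random walk observed with noise
rng = np.random.default_rng(0)
F = np.array([[1.0]]); H = np.array([[1.0]])
Q = np.array([[0.1]]); R = np.array([[0.5]])
y = rng.standard_normal((50, 1))
val = kalman_neg_loglik(y, F, H, Q, R)
print(val)
```

The same Cholesky idea carries over to the R code via chol(), with the quadratic form computed by backsolve(), avoiding the repeated det(F10) and solve(F10) calls inside the loop.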
2024.04.17 03:30 BobzNVagan We are not evil - Art Share!
Hello! submitted by BobzNVagan to ClipStudio
First time posting here, but I thought I'd like to share. My name is Vesnu Studio and I am a freelance digital artist! I know everyone has their issues with Clip Studio Paint 3, but I am enjoying it so far! Here is some art from a project I am working on ((Shan't spoil too much ;3)).
I originally made the sketch on my boyfriend's brother's iPad using Clip Studio with the normal sketch brush that comes installed; then, once I got back onto my main PC, using my Cintiq 24, I finally got around to finishing it off! I mainly do adult material within the furry fandom, so to show off a snippet of my work, I came up with this concept piece called "We are not evil!"
If you would like to see more of my work, you can check me out on Furaffinity at "vesnu" or you can check out my website at www.vesnustudio.com — for Telegram users, you can join my channel at https://t.me/VesnuStudios ((18+ only!))
Critique is a must, as I strive to improve and learn more; the colouring/shading stage in digital art is still something I struggle with and find it difficult to get into a groove!
—————————
Brushes used:
Sketches:
- Basic sketch brushes that come with the application ((first used the initial brush whose name I can't remember, while I used mechanical to clean up the sketch))
Lines:
- Line work brush was the "7Havoc" pen, with opacity at 80% ((it's at 100 by default))
- Line pen is also at full anti-aliasing
Colouring:
- Fill bucket, selecting the background first and inverting with a flat base, then doing it in sections
Shading:
- Lasso tool fill
- Wet Indian inks
- Minor use of crayon and pastel
- Wet pen set ((can't remember the name - will update the post in a few hours))
- Knife brush ((found in the asset store))
((Will update more in the next few hours when on PC))
Misc:
If it were possible to post more than one image, I would include my workflow here to show you all how I work, so you could give me more critique on being efficient and not taking the long way when the same effect could be achieved in a second.
2024.04.17 02:07 MArcherCD The Astonishing Ant-Man: Ghosts of The Past
2024.04.14 18:52 SwellsInMoisture 10 years ago, we discussed how we select stocks. Here's the 2024 version.