Simplifying fractions with variables and exponents calculator

Financial Independence / Retire Early

2011.11.10 16:15 Financial Independence / Retire Early

This is a place for people who are or want to become Financially Independent (FI), which means not having to work for money. Financial Independence is closely related to the concept of Early Retirement/Retiring Early (RE) - quitting your job/career and pursuing other activities with your time. At its core, FI/RE is about maximizing your savings rate (through less spending and/or higher income) to achieve FI and have the freedom to RE as fast as possible.
[link]


2011.12.24 21:04 blitz0x Stellar

Stellar is a decentralized protocol that enables you to send money to anyone in the world, for fractions of a penny, instantly, and in any currency. Stellar is for news, announcements and open-discussion related to Stellar and its community. Stellar is not officially maintained or moderated by the Stellar Development Foundation.
[link]


2014.08.24 03:27 Broiledvictory Criticism of Kickstarter, Indiegogo, and other crowdfunding projects

A place for criticizing individual projects on Kickstarter, Indiegogo, and similar platforms. Ideally, the goal is to allow for some accountability for projects and to hopefully fight scams.
[link]


2024.04.29 00:15 TangoJavaTJ How would we go about estimating the probability that a given team wins the Premier League?

I was chatting with my boyfriend about that time Leicester won the Premier League. Apparently some betting companies had given odds of 5000/1 (a probability of roughly 1/5000), which to me seemed intuitively about right.
But how would we go about doing this rigorously?
Laplace's rule of succession gives a way to estimate the unknown success probability of a binomial random variable, and it works like this:
We add 1 phantom "heads" and 1 phantom "tails" to the count, then add the actually observed values.
So a coin which has actually landed HTH would be estimated to have a (2 + 1) / (3 + 2) = 60% chance of landing H.
Obviously we can't start by assigning every team 1W and 1L, since then the probability that each team wins would be 50%, which doesn't make sense. So maybe we should instead extend Laplace's idea and assign each team 1W and 19L, so that each starts with a 5% chance of winning, and then add their actually observed wins.
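A minimal sketch of that pseudo-count idea (my own illustration; the records are made up):

    # Laplace-style estimate with pseudo-counts: each team starts with
    # 1 phantom win and 19 phantom losses, then adds its real record.
    def laplace_estimate(wins, seasons, pseudo_wins=1, pseudo_losses=19):
        # P(win) = (wins + pseudo_wins) / (seasons + pseudo_wins + pseudo_losses)
        return (wins + pseudo_wins) / (seasons + pseudo_wins + pseudo_losses)

    print(laplace_estimate(0, 0))    # no history yet: 1/20 = 0.05
    print(laplace_estimate(10, 10))  # won 10 of 10 seasons: 11/30, about 0.367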
But this also doesn't feel like it does so well, because if Leicester had (hypothetically) won the Premier League in each of the first 10 years of its existence, then the probability that they win in the 11th year feels like it should be a lot more than 11/30.
Also, it doesn't seem like a team whose record is WWLLLLLLLL is as likely to win as a team whose record is LLLLLLLLWW: the team that won more recently is clearly more likely to win this year, since they still have most of the same players who won it last year.
So instead I considered a Bayesian approach. Perhaps each team is given a Bayesian prior of 5% in the first season, and then we update the priors according to how well each team does each season.
P(hypothesis | evidence) = P(evidence | hypothesis) × P(hypothesis) / P(evidence)
So our evidence at each iteration of our Bayesian model is the position the team finished this season. It seems like we need some way to update our priors such that the higher the team finished in the league, the higher the prior that they will win next season.
I'm trying to come up with some way to update the priors according to where each team finished in the league. Obviously finishing higher should increase a team's prior and finishing lower should decrease it, and the next generation of probabilities still has to sum to 1.
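For instance, here's a minimal sketch of the kind of update I'm imagining (the likelihood function is completely made up; choosing it sensibly is exactly my problem):

    import numpy as np

    n_teams = 20
    priors = np.full(n_teams, 1 / n_teams)  # every team starts at 5%

    def update(priors, positions, decay=0.8):
        # Hypothetical likelihood P(finishing position | team wins next season),
        # decaying geometrically the lower a team finished. This choice is arbitrary.
        likelihood = decay ** (positions - 1)
        posterior = likelihood * priors
        return posterior / posterior.sum()  # renormalize so the probabilities sum to 1

    positions = np.arange(1, n_teams + 1)  # positions[i] = where team i finished (1 = champion)
    priors = update(priors, positions)
    print(priors[:3], priors.sum())  # top finishers gain probability mass; total stays 1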
Even in a sketch like that, P(E) and P(E|H) don't seem to have obvious values, which I think is what I'm struggling with. How might I approach this?
submitted by TangoJavaTJ to learnmath [link] [comments]


2024.04.29 00:14 Dingohh Automating my 1099 contractors' payroll using Sheets, need a little help

Right now I have 18 contractors getting weekly paychecks; their pay is commission-based.
I have a small sheet I made to help calculate their pay a little quicker than doing it fully manually, but I have a feeling I can make it even better and further automate the process.
Right now I have 7 columns and 18 rows. The columns are: A (Contractor ID), B (Contractor Name), C (Quantity of Sales), D (Gross Sales), E (Tips), F (Commission based off of gross), G (Net Earnings).
I get the gross sales and all the information needed to calculate their pay emailed to me weekly, with every contractor on one PDF. Each contractor has a different commission rate, and each row is tailored to the individual on it. For example, Amanda makes 50% commission, so assuming she is on row 2, her row would look something like this: A (Example#), B (Amanda Example), C (20), D ($1,000.00), E ($100.00), F (=D2*0.50), G (=E2+F2).
As of right now I read the sales report that gets emailed to me and manually input the sales for each contractor. Basically, I want to know whether I could convert the PDF into a CSV or something and paste in just the useful information, column by column, with the customized commission formulas tied to the individual's name rather than the row. Sometimes not all of my contractors work in a given week, so if I just copy-pasted the columns, one absent contractor would cause every row beneath them to be misplaced.
To try and simplify the question... can I have Google Sheets recognize a name such as "Amanda Example" and know that her column F needs to be =D?*0.50 regardless of the row that this particular contractor may end up on?
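What I'm picturing is something like a lookup table of commission rates on another tab, so column F can hold the same formula on every row. The "Rates" range below is hypothetical (contractor names in Rates!A, rates like 0.50 in Rates!B):

    =D2 * VLOOKUP(B2, Rates!A:B, 2, FALSE)

Filled down the column, that would pull each contractor's rate wherever their name lands, though I'm not sure it's the cleanest approach.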
submitted by Dingohh to googlesheets [link] [comments]


2024.04.28 23:45 lazy_warlord 3D Spinning Cube in terminal.

Hi all!
So I have been trying to make a console-based 3D spinning cube program. The code that I have managed to write is as follows:
import math
import time
import os

# Variables
cubeWidth = 20
width = 80
height = 44
zBuffer = [0] * (width * height)
buffer = [' '] * (width * height)  # Initialize buffer with spaces
distance_from_cam = 100
K1 = 40.0

# Set a value for this in the main program
horizontalOffset = 0
increment_speed = 1

def CalculateX(i, j, k, A, B, C):
    return (j * math.sin(A) * math.sin(B) * math.cos(C)
            - k * math.cos(A) * math.sin(B) * math.cos(C)
            + j * math.cos(A) * math.sin(C)
            + k * math.sin(A) * math.sin(C)
            + i * math.cos(B) * math.cos(C))

def CalculateY(i, j, k, A, B, C):
    return (j * math.cos(A) * math.cos(C)
            + k * math.sin(A) * math.cos(C)
            - j * math.sin(A) * math.sin(B) * math.sin(C)
            + k * math.cos(A) * math.sin(B) * math.sin(C)
            - i * math.cos(B) * math.sin(C))

def CalculateZ(i, j, k, A, B):
    return k * math.cos(A) * math.cos(B) - j * math.sin(A) * math.cos(B) + i * math.sin(B)

def calculateForSurface(cubeX, cubeY, cubeZ, ch, A, B, C):
    x = CalculateX(cubeX, cubeY, cubeZ, A, B, C)
    y = CalculateY(cubeX, cubeY, cubeZ, A, B, C)
    z = CalculateZ(cubeX, cubeY, cubeZ, A, B) + distance_from_cam
    # Calculate perspective projection
    ooz = 1 / z
    xp = int((width / 2) + horizontalOffset + K1 * ooz * x * 2)
    yp = int((height / 2) + K1 * ooz * y)
    idx = xp + yp * width
    if 0 <= idx < width * height:
        if ooz > zBuffer[idx]:
            zBuffer[idx] = ooz
            buffer[idx] = ch

# Function to clear the screen
def clear_screen():
    os.system('cls' if os.name == 'nt' else 'clear')
    print('\033[H\033[J', end='')  # ANSI escape codes to clear the screen and move cursor to top left

start_time = time.time()

while True:
    elapsed_time = time.time() - start_time
    # Clear the screen
    clear_screen()
    # Reset zBuffer for each frame
    zBuffer = [0] * (width * height)
    # Reset horizontal offset
    horizontalOffset = 0
    # Calculate rotation angles based on elapsed time
    rotation_speed_x = 1
    rotation_speed_y = 2
    rotation_speed_z = 0.5
    A = rotation_speed_x * elapsed_time
    B = rotation_speed_y * elapsed_time
    C = rotation_speed_z * elapsed_time
    # Rendering code
    cubeWidth = 20
    # Render the cube faces
    for cubeX in range(-cubeWidth, cubeWidth, increment_speed):
        for cubeY in range(-cubeWidth, cubeWidth, increment_speed):
            calculateForSurface(cubeX, cubeY, -cubeWidth, '@', A, B, C)
            calculateForSurface(cubeWidth, cubeY, cubeX, '$', A, B, C)
            calculateForSurface(-cubeWidth, cubeY, -cubeX, '~', A, B, C)
            calculateForSurface(-cubeX, cubeY, cubeWidth, '#', A, B, C)
            calculateForSurface(cubeX, -cubeWidth, -cubeY, ';', A, B, C)
            calculateForSurface(cubeX, cubeWidth, cubeY, '+', A, B, C)
    # Print the buffer
    os.system('cls' if os.name == 'nt' else 'clear')
    for k in range(0, width * height, 1):
        print(buffer[k] if k % width != 0 else '\n', end='')
    clear_screen()
    time.sleep(1)  # Adjust the sleep time for frame rate
However I have the following problems with this code:
1. The output is not a cube. It looks as if the face of the cube that is behind is not rotating, which gives the output a sort of diamond shape.
2. The output ends up on different lines, and the terminal screen is not being cleared properly between frames.
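The only concrete guess I have so far is about problem 2: the print loop replaces the first character of every row with '\n', and buffer is never reset between frames. Something like this might be closer (untested):

    # Untested guess: reset the character buffer at the top of each frame...
    buffer = [' '] * (width * height)

    # ...and print each row in full, adding the newline after the row
    # instead of overwriting the row's first character.
    for row in range(height):
        print(''.join(buffer[row * width:(row + 1) * width]))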

Beyond that, idk what is happening. I feel so dumb, this coding thing is just not for me. Please help!
submitted by lazy_warlord to CodingHelp [link] [comments]


2024.04.28 23:21 Filumena_D **Update** The lyrics of the song


So, I decided to download the latest patch. They messed up again somewhere, but it's supposed to be a new update for the RU-BDO server. **New update, new update!** All my PvE dreams will come true with this new patch. 700 GS per month, you know? New update, fishing and horse stuff. New update, minus 10k karma. Once again, the update simplifies the game by randomly assigning heroes, but you won't have balance. The trash class will add damage to mana sticks and water fish, and it's sad that there will be lag in the game.
Another update, server patch, no more invulnerability. FPS won't work, and I died in the clouds again. The update affects heroes and sieges. The review took 5 hours. The clan arena is trash and the hero editor is one, the variables are removed, and there are long sieges again! You can go free to test the combo, but soon you'll be reading thousands of words again for a few hours. Because the patch is new, rebalancing is happening. Two years later, a new solar patch. A new leader is leaving, and HI-VAR is fading away. There are almost no communities left.
You go to your favorite spot, and there's some crazy guy kicking mobs. He's already on the forum, chin-choking, "May there be salvation for you!" You catch a horse, but it's bullshit for beginners. This is for beginners, and a free T9 horse. It's the fifth item to choose from and it doesn't feel like zero anymore. You come to finish it, "I'm a beginner!!! How is that possible?" A new patch restores the stones for donations. A guarantor for sharpening, 50 drops every month. Not a shit eater, but a mixer!
You take part in a siege and there are 12 piles nearby. We survived as best as we could, but for some reason the servers just crashed. You know, they were cutting up some promo codes, and we figured out how to balance the server for you in half a year! New patch, new PVP, new patch, new barcode fired in the basement, new patch from the developers, who ignored everyone again, new patch from PEARL ABYSS, and another new patch from PEARL ABYSS.
submitted by Filumena_D to BDO_Killky [link] [comments]


2024.04.28 22:37 Titty_Slicer_5000 Tensorflow Strided Slice Error. Need help.

TLDR at the bottom
My Full Tensorflow Code: Link. Please excuse all the different commented-out parts of code; I've had a long road of troubleshooting this code.
Hardware and Software Setup
-Virtual Machine on Runpod
-NVIDIA A100 GPU
-Tensorflow 2.15
-CUDA 12.2
-cuDNN 8.9
What I'm doing and the issue I'm facing
I am trying to create a visual generator AI, and to that end I am implementing the TGANv2 architecture in Tensorflow. The TGANv2 model I am following was originally written in Chainer by some researchers. I also implemented it in PyTorch (here is my PyTorch code if you are interested) and also ran it in Chainer. It works fine in both. But when I try to implement it in Tensorflow I start running into this error:
Traceback (most recent call last):
  File "/root/anaconda3/envs/tf_gpu/lib/python3.11/site-packages/tensorflow/python/ops/script_ops.py", line 270, in __call__
    ret = func(*args)
  File "/root/anaconda3/envs/tf_gpu/lib/python3.11/site-packages/tensorflow/python/autograph/impl/api.py", line 643, in wrapper
    return func(*args, **kwargs)
  File "/root/anaconda3/envs/tf_gpu/lib/python3.11/site-packages/tensorflow/python/data/ops/from_generator_op.py", line 198, in generator_py_func
    values = next(generator_state.get_iterator(iterator_id))
  File "/workspace/3TF-TGANv2.py", line 140, in __iter__
    yield self[idx]
  File "/workspace/3TF-TGANv2.py", line 126, in __getitem__
    x2 = self.sub_sample(x1)
  File "/workspace/3TF-TGANv2.py", line 99, in sub_sample
    x = tf.strided_slice(x, begin, end, strides)
  File "/root/anaconda3/envs/tf_gpu/lib/python3.11/site-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/root/anaconda3/envs/tf_gpu/lib/python3.11/site-packages/tensorflow/python/eager/execute.py", line 59, in quick_execute
    except TypeError as e:
tensorflow.python.framework.errors_impl.InvalidArgumentError: {{function_node __wrapped__StridedSlice_device_/job:localhost/replica:0/task:0/device:GPU:0}} Expected begin and size arguments to be 1-D tensors of size 2, but got shapes [4] and [2] instead. [Op:StridedSlice]
What's important to note about this issue is that it does not come up right away. It can go through dozens of batches before it pops up. This error was generated with a batch size of 16, but if I lower my batch size to 8 I can even get it to run for 5 epochs (the longest I've tried). The outputs of the Generator are not what I saw with Chainer or PyTorch after 5 epochs (it's mostly just videos of a giant black blob), though I am unsure if this is related to the issue. So with a batch size of 8 sometimes the issue comes up and sometimes it doesn't. If I lower the batch size to 4, the issue almost never comes up. The fact that this is batch-size driven really perplexes me. I've tried it with multiple different GPUs.
Description of relevant parts of model and code
The way the Generator works is as follows. There is a CLSTM layer that generates 16 feature maps, each with a 4x4 resolution and 1024 channels. Each feature map corresponds to a frame of the output video (the output video has 16 frames and runs at 8 fps, so it's a 2-second-long gif).
During inference, each feature map passes through 6 upsampling blocks, with each block doubling the resolution and halving the channels. So after 6 blocks the shape of each frame is (256, 256, 16): a 256p resolution and 16 channels. Each frame then gets passed through a rendering block to render it into a 3-channel image of shape (256, 256, 3). So the final shape of the output video is (16, 256, 256, 3) = (T, H, W, C), where T is the number of frames, H the height, W the width, and C the number of channels. This output is a single tensor.
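As a quick sanity check on that shape arithmetic (a throwaway snippet, not part of the model):

    # 6 upsampling blocks, each doubling resolution and halving channels
    res, channels = 4, 1024
    for _ in range(6):
        res, channels = res * 2, channels // 2
    print(res, channels)  # 256 16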
During training the setup is a bit different. The generated output video gets split up into 4 "sub-videos", each of varying resolution and frame count. This produces a tuple of tensors: (tensor1, tensor2, tensor3, tensor4). The shapes (after each goes through a rendering block to reduce the channels to 3) are tensor1=(16, 32, 32, 3), tensor2=(8, 64, 64, 3), tensor3=(4, 128, 128, 3), tensor4=(2, 256, 256, 3). As you can see, as you go from tensor1 to tensor4 the frame count gets halved each time while the resolution doubles. The real video examples also get split up into 4 sub-video tensors of the same shapes. These sub-videos are what get fed into the discriminator. The functionality that halves the frame length is called sub-sampling: it starts at either the first or second frame (this is supposed to be random) and then selects every other frame. There is a sub-sample function in both the VideoDataset class (which takes the real videos and generates the 4 sub-video tensors) and in the Generator class. The VideoDataset class operates on 4-D tensors (T, H, W, C), while the Generator class operates on 5-D tensors because it has a batch dimension N.
This is the sub-sample function in the VideoDataset class:
    def sub_sample(self, x, frame=2):
        original_shape = x.shape  # Logging original shape
        offset = 0
        begin = [offset, 0, 0, 0]  # start from index 'offset' in the frame dimension
        end = [original_shape[0], original_shape[1], original_shape[2], original_shape[3]]
        strides = [frame, 1, 1, 1]  # step 'frame' in the frame dimension
        x = tf.strided_slice(x, begin, end, strides)
        expected_frames = original_shape[0] // frame
        #print(f"VD Expected frames after sub-sampling: {expected_frames}, Actual frames: {x.shape[0]}")
        if x.shape[0] != expected_frames:
            raise ValueError(f"Expected frames: {expected_frames}, but got {x.shape[0]}")
        return x
This is the sub-sample function in the Generator class:
    def sub_sample(self, x, frame=2):
        original_shape = x.shape  # Logging original shape
        offset = 0
        begin = [0, offset, 0, 0, 0]  # start from index 'offset' in the second (frame) dimension
        end = [original_shape[0], original_shape[1], original_shape[2], original_shape[3], original_shape[4]]
        strides = [1, frame, 1, 1, 1]  # step 'frame' in the second dimension
        x = tf.strided_slice(x, begin, end, strides)
        expected_frames = original_shape[1] // frame
        #print(f"Gen Expected frames after sub-sampling: {expected_frames}, Actual frames: {x.shape[1]}")
        if x.shape[1] != expected_frames:
            raise ValueError(f"Expected frames: {expected_frames}, but got {x.shape[1]}")
        return x
You'll notice I am using tf.strided_slice(). I originally did the slicing/sub-sampling using the same notation you would use for slicing a NumPy array, x = x[:, offset::frame, :, :, :], and changed it because I thought maybe that was causing some sort of issue.
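For clarity, the two notations are meant to do the same thing. A standalone snippet (made-up shapes, offset fixed at 0) showing the intended equivalence:

    import tensorflow as tf

    x = tf.random.normal((3, 16, 32, 32, 3))  # made-up (N, T, H, W, C) batch
    a = x[:, 0::2, :, :, :]                   # NumPy-style every-other-frame slice
    b = tf.strided_slice(x, begin=[0, 0, 0, 0, 0], end=tf.shape(x), strides=[1, 2, 1, 1, 1])
    print(a.shape, b.shape)                   # both (3, 8, 32, 32, 3)
    print(bool(tf.reduce_all(a == b)))        # True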
Below is a block diagram of the Generator and VideoDataset (labeled "Dataset" in the block diagram) functionalities.
https://preview.redd.it/2vh7yx2g09xc1.png?width=1862&format=png&auto=webp&s=143d5c4c8df91fc71b9da1d3858feaae28c4605a
A point of note about the block diagram: the outputs of Dataset are NOT combined with the outputs of the Generator, as might be mistakenly deduced from the drawing. The discriminator makes predictions on the Generator outputs and the Dataset outputs separately.
I don't think this issue is happening in the backward pass because I put in a bunch of print statements and based on those print statements the error does not occur in the middle of a gradient calculation or backward pass.
My Dataloader and VideoDataset class
Below is how I am actually fetching data from my VideoDataset class:
    # Create dataloader
    dataset = VideoDataset(directory)
    dataloader = tf.data.Dataset.from_generator(
        lambda: iter(dataset),  # Corrected to use iter() to clearly return an iterator from the dataset
        output_signature=(
            tf.TensorSpec(shape=(16, 32, 32, 3), dtype=tf.float32),
            tf.TensorSpec(shape=(8, 64, 64, 3), dtype=tf.float32),
            tf.TensorSpec(shape=(4, 128, 128, 3), dtype=tf.float32),
            tf.TensorSpec(shape=(2, 256, 256, 3), dtype=tf.float32)
        )
    ).batch(batch_size)
and here is my VideoDataset class:
class VideoDataset():
    def __init__(self, directory, fraction=0.2, sub_sample_rate=2):
        print("Initializing VD")
        self.directory = directory
        self.fraction = fraction
        self.sub_sample_rate = sub_sample_rate
        all_files = [os.path.join(self.directory, file) for file in os.listdir(self.directory)]
        valid_files = []
        for file in all_files:
            try:
                # Read the serialized tensor from file
                serialized_tensor = tf.io.read_file(file)
                # Deserialize the tensor
                tensor = tf.io.parse_tensor(serialized_tensor, out_type=tf.float32)  # Adjust dtype if necessary
                # Validate the shape of the tensor
                if tensor.shape == (16, 256, 256, 3):
                    valid_files.append(file)
            except Exception as e:
                print(f"Error loading file {file}: {e}")
        # Randomly select a fraction of the valid files
        selected_file_count = int(len(valid_files) * fraction)
        print(f"Selected {selected_file_count} files")
        self.files = random.sample(valid_files, selected_file_count)

    def sub_sample(self, x, frame=2):
        original_shape = x.shape  # Logging original shape
        offset = 0
        begin = [offset, 0, 0, 0]  # start from index 'offset' in the frame dimension
        end = [original_shape[0], original_shape[1], original_shape[2], original_shape[3]]
        strides = [frame, 1, 1, 1]  # step 'frame' in the frame dimension
        x = tf.strided_slice(x, begin, end, strides)
        expected_frames = original_shape[0] // frame
        #print(f"VD Expected frames after sub-sampling: {expected_frames}, Actual frames: {x.shape[0]}")
        if x.shape[0] != expected_frames:
            raise ValueError(f"Expected frames: {expected_frames}, but got {x.shape[0]}")
        return x

    def pooling(self, x, ksize):
        if ksize == 1:
            return x
        T, H, W, C = x.shape
        Hd = H // ksize
        Wd = W // ksize
        # Reshape the tensor to merge the spatial dimensions into the pooling blocks
        x_reshaped = tf.reshape(x, (T, Hd, ksize, Wd, ksize, C))
        # Take the mean across dimensions 2 and 4, the spatial dimensions within each block
        pooled_x = tf.reduce_mean(x_reshaped, axis=[2, 4])
        return pooled_x

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        #print("Calling VD getitem method")
        serialized_tensor = tf.io.read_file(self.files[idx])
        video_tensor = tf.io.parse_tensor(serialized_tensor, out_type=tf.float32)
        x1 = video_tensor
        x2 = self.sub_sample(x1)
        x3 = self.sub_sample(x2)
        x4 = self.sub_sample(x3)
        #print("\n")
        x1 = self.pooling(x1, 8)
        x2 = self.pooling(x2, 4)
        x3 = self.pooling(x3, 2)
        #print(f"Shapes of VD output = {x1.shape}, {x2.shape}, {x3.shape}, {x4.shape}")
        return (x1, x2, x3, x4)

    def __iter__(self):
        print(f"Calling VD iter method, len self = {len(self)}")
        # Make the dataset iterable, so it can be used directly with tf.data.Dataset.from_generator.
        for idx in range(len(self)):
            yield self[idx]
In my opinion the issue is happening at some point while the dataloader is fetching examples from the VideoDataset; I just can't figure out what is causing it.
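In case it helps, this is the kind of diagnostic guard I've been thinking of adding around the slice (my own sketch, not a fix): it would at least reveal what shape actually reaches sub_sample when the error fires.

    # Diagnostic sketch: check the rank before slicing so a bad input fails loudly.
    def sub_sample_checked(x, frame=2):
        rank = len(x.shape)
        if rank != 4:  # the VideoDataset version expects (T, H, W, C)
            raise ValueError(f"sub_sample got rank {rank}, shape {x.shape}")
        begin = [0] * rank
        end = list(x.shape)
        strides = [frame] + [1] * (rank - 1)
        return tf.strided_slice(x, begin, end, strides)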
TLDR
I am using a Runpod VM with an NVIDIA A100 GPU. I am trying to train a GAN that outputs 2-second-long gifs made up of 16 frames. One of the training steps involves splitting the output video (either real or fake) into 4 sub-videos of different frame lengths and resolutions. The reduction of frames is achieved by a sub-sample function (which you can find earlier in my post) that starts at the first or second frame of the video (at random) and then selects every other frame, so it halves the frames. So I am essentially doing a strided slice on a tensor, and I am using tf.strided_slice(). I tried using regular slicing notation (like you would use in NumPy), and I get the same error. The weird thing is that the issue does NOT come up immediately in training and is dependent on batch size. The training goes through several batch iterations just fine (and sometimes some epochs) with a batch size of 16. If I lower the batch size to 8 it's able to go through even more iterations, even up to 5 epochs (I didn't test it for longer), although the outputs are not what I would expect after some epochs (I expect a specific type of noisy image based on how this model ran in the PyTorch and Chainer frameworks, but I instead get a video that's mostly just a black blob, with a bit of color at the edges). If I go down to a batch size of 4 the issue mostly goes away. See below for the error I am seeing:
Error:
Expected begin and size arguments to be 1-D tensors of size 2, but got shapes [4] and [2] instead. [Op:StridedSlice]
submitted by Titty_Slicer_5000 to MLQuestions [link] [comments]


2024.04.28 22:37 Hajeia NEED ADVICE: Gem Stalker Redesign for 2nd Level Party

Hi everyone,
I am unsure how and if I can add images, so sorry if there are none to help you picture what I'm talking about. I am happy to share the images of the map and stat block in some way if I'm told how :D

TL;DR

I find it really hard to create balanced combat encounters, especially ones that are challenging but doable and in the end rewarding (like boss fights). Therefore I need help adjusting the difficulty of my final boss and the fight for a rather new group of Level 2 players. I used a self-made homebrew version of a GEM STALKER with some adjusted stats and Lair Actions (see below). Do you think that is balanced, also given the terrain? Or would you change something to make it fairer?

PARTY

I am currently planning out the final battle of a homebrew one-shot adventure for first-time players and need help with balancing the final boss for my party (I haven't done a lot of homebrew monsters so far). It will be their second session, but they're doing very well in terms of understanding the rules, using their abilities, and roleplaying (even in combat).
We're using some simplified rules, the amazing DnD Story Mode made by u/joelesko, which I adjusted myself by adding some more races, an additional feature specific to a kind of subclass, and making the druid and ranger classes a bit more unique; nothing so major that it should impact the balance of the rule set too much (at least I hope so xD, so far it has felt quite balanced). Here are the Class Reference Sheets and Character Creation Guide I made for my players, if anyone wants to have a look as a reference for how the characters were built. (Unfortunately my guides are all in German, as my players don't speak English very well, but maybe you can still glean the most important information from them, as many features are kept in English for simplicity of referencing. Sorry about that.)
The party consists of 5 players, each Level 2:
I was thinking of maybe leveling them up to Level 3 before the boss fight to make them a bit stronger and more resistant, but I'm unsure if that's the way to go, as they already leveled up once during the last session (after their first fight and completing the first part of the adventure) and I don't want to overwhelm them with options. Or should I just give them a boost in abilities? I'd have a way to make that plausible in game (same for the level-up).

MONSTER: GEM STALKER

Generally they will be fighting a cursed amethyst dragon (called Belayana) with a stat block based on that of an Amethyst Gem Stalker, who is trying to protect a magic tree that gives life energy to the surrounding forest and the beings within it. They can either kill her (and let the forest die but get the "quest" money), knock her unconscious and help Belayana get back to her true dragon form by uniting her body with that of the gem tree, or (with a lot of clever thinking and luck) they might be able to persuade her to stand down and find a different way.
Belayana will be rather aggressive when fighting, as she sees everyone in the cave as a danger to the tree and the forest, and is intent on protecting them no matter what. She will use walls, water, and her teleport to her advantage to attack and move around. Although she has been turned into a monster, she is still intelligent and acts like it, but will fight to the death.
I adjusted the stat block of the Gem Stalker to CR 2 using the 5etools integrated CR adjustments for damage and abilities. Below is everything I changed or added (indicated by a +).
I was thinking of either modifying a mephit (which one, though?) to a gem flavour to use as a minion AND/OR using the Small Earth Elemental (adjusted to only one attack per turn) created by u/Kankerata. I usually like using minions because, imo, the fight then gains better action economy instead of just circling and pummeling the big bad.
MAP
For the sake of completeness, and to better calculate what difficulty is appropriate:
I used this map by u/FantastiskDoD as a general base and adjusted it to my amethyst setting for Foundry VTT. The map is ~190 ft long and 160 ft top to bottom (5 ft grid). There are two bodies of water (left and right), and some of the crystals on the ground can be used to hide behind. The big gemstone inside the right lake is a stand-in for the big magical gem tree that fuels the magic of the forest and of Belayana. There are stone slips inside the left lake that can be traversed by jumping (and, if necessary, an Athletics check), but they are considered difficult terrain as they are wet and slippery. A hidden entrance through a waterfall is the primary way into the cave. The ground between the lakes is about 30 ft wide at its smallest part.

Thanks in advance for any help and ideas :D I am happy to answer any questions that might arise, or to post this somewhere else where it fits better. I'm aware it's a lot of information and not presented in the best way; I just don't know what would be important in that regard >.<
submitted by Hajeia to DMAcademy [link] [comments]


2024.04.28 21:12 Repulsive_Union2244 [[FOR HIRE]] -- Pay Someone to Take My Statistics Exam For Me Reddit -- Take My Statistics Test For Me -- Do My Statistics Exam -- Statistics Exam Taker -- Pay Someone to Take My Online Statistics Class For Me -- Pay Someone to Take My Statistics Class For Me -- MATH: Probability Algebra MyStatLab

First of all, these are the contact details to reach us for help with any type of academic task in any subject:
MY CONTACT INFO:
WhatsApp: +1 (213) 594-5657
Call: +1 727 456 9641
Website: hiraedu. com
Email: info@hiraedu. com
ASSESSMENTS I CAN COMPLETE:
MY MATH SUBJECTS OF EXPERTISE:
I am very knowledgeable and proficient in assisting students in a wide range of mathematics classes. I can help students complete their homework assignments and other projects; get an A on quizzes, tests, and exams (including proctored assessments); answer online discussion posts; write essays and papers in MLA, APA, and Chicago format; and provide general academic help in each math course listed below:
STATISTICS HELP (MY BEST SUBJECT):
ALGEBRA HELP:
CALCULUS HELP:
ATTRIBUTES THAT SET ME APART FROM OTHER TUTORS:
I CAN AID STUDENTS TAKING PROCTORED ASSESSMENTS:
I CAN VERIFY MY ACADEMIC KNOWLEDGE & SKILLS:
I HAVE PAID ACCESS TO OVER 15 STUDY-HELP WEBSITES AND MATHEMATICAL SOFTWARE:
I ALWAYS ACCEPT CALLS:
I WRITE LIKE A PROFESSIONAL:
MY EDUCATIONAL SOFTWARE OF EXPERTISE:
SCHOOLS FROM WHICH I'VE HELPED STUDENTS IN :
As of 2021, I have tutored and helped students enrolled at the following U.S. universities, community colleges, county & city colleges, schools, and for-profit institutions, listed below in alphabetical order:
I OFFER FLEXIBLE PAYMENT PLANS:
HELP AVAILABLE FOR OTHER SUBJECTS:
THE OBLIGATORY "IS THIS A SCAM?" QUESTION:
Considering the fact that you found my contact information online, it’s understandable to be skeptical regarding the legitimacy of my services. Therefore, I’m willing to do all of the following to help you feel more secure in trusting me with your academic needs:
HOW TO CONTACT ME:
CONCLUSION:
OCT 2021 UPDATE: I am currently offering discount deals for requests for assistance with completing a student's entire course for the Fall 2024 semester (14 - 20 week courses acceptable), as well as discounts for students seeking help with multiple exams and/or multiple classes for Fall 2024. My availability for the Autumn 2024 / Fall 2024 semester will likely become limited very quickly as I receive more and more academic requests. Therefore it would be very advantageous to reach out to me for academic assistance before my schedule becomes too full.
MY CONTACT INFO:
WhatsApp: +1 (213) 594-5657
Call: +1 727 456 9641
Website: hiraedu. com
Email: info@hiraedu. com
IMPORTANT: When reaching out, please try to include the following information in the initial text message or email so that I can have all the important details necessary to determine the rate for my services:
submitted by Repulsive_Union2244 to Statisticshelpers_ [link] [comments]


2024.04.28 21:10 Ok-Train-9067 EU Nembus.nl - 5x Solo/Duo/Trio/QuadLoot X5TPKitsJUST WIPED

🌐 Welcome to Nembus.nl

In addition to our exciting selection of events, we offer a variety of plugins to enhance your gaming experience even further:
🔒 Rust+ Raid Notifications: Get notifications on your phone when you are getting raided.
🔑 AutoCodeLock and AutomaticAuthorisation: Simplify the management of your base with automatic code locks and authorisations.
🏠 BuildingSkins: Customise the look of your base with a variety of building skins and make it a real eye-catcher.
💼 On our Nembus server, we emphasise clear rules and transparent administration. Our dedicated team is available for questions and concerns to ensure that every player has a positive experience. 🛠️👥
🌟Experience the thrill of Rust in an environment built on stability and integrity. From exciting PvP battles to collaborative projects, AfiA offers a balanced gaming experience for those who want to master the challenges of the rustic world. 🌍🏞️
🔗 Join us and become part of the Rust legend on AfiA. Not only is rust created here - history is made here! 💪🌟
🌐 Server IP: [client.connect play.nembus.nl:28015]
or under Modded Server: EU Nembus.nl - 5x Solo/Duo/Trio/QuadLoot X5TPKitsJUST WIPED
🌐 Discord: nembus.nl/discord
🤝 Join and become part of a solid, growing gaming community that focuses on quality and reliability! Join our adventure today to show the undead and other players who's in charge. See you in Nembus! 🌐🕹️
submitted by Ok-Train-9067 to playrustservers [link] [comments]


2024.04.28 21:05 murpleturkey [self] Odds of identical card shuffles, the birthday problem and the birthday attack

There have been lots of interesting social media posts lately making use of the fact that the number of ways a deck of 52 cards can be ordered is astronomically large. Specifically, 52! ways, or 8e67 in scientific notation. It's therefore mathematically impossible that more than a tiny fraction of these possible orders have been actually shuffled since the beginning of time.
These posts take it a step further, giving examples of the number of times you'd have to shuffle a deck before you'd be likely to get an identical ordering. Most examples seem to shoot for 52!/2, or about 4e67, shuffles before it's more likely than not that you'd find an identical ordering. I believe these estimates are incorrect, and I'm going to use the classic "birthday problem" to illustrate why.
The birthday problem imagines an empty room, with people walking in one by one. As the population of the room incrementally increases, the problem asks: at what point is it more likely than not that 2 people in the room share a birthday? The common, intuitive, and also incorrect answer is 365/2, or 183 whole human beings. But that answers a different question, and not even exactly: 183 is roughly where the expected number of people sharing a birthday with the FIRST person in the room reaches one half (a match with that specific person doesn't actually become more likely than not until 253 people). What was actually asked was the odds that ANY two people share a birthday, so we need to take into account all the possible pairings of people in the room. When we do that, we find that the answer is 23. There are 253 ways to pair up 23 people, far more than half the number of days in a year. This surprisingly low answer is known as the birthday paradox.
Taking it back to the card problem, we can see that 52!/2 is (roughly) the scale at which a duplicate of shuffle #1 in particular becomes likely. But we're looking for the point at which ANY two shuffles are identical, so that point must come much, much sooner than 52!/2. Still likely an enormous number, but much smaller than commonly stated. How would we calculate it? We could try the same approach commonly used to solve the birthday problem: find the odds that all shuffles are unique for a given number of shuffles n.
The odds of shuffle 1 being unique: 1, of course.
The odds of shuffle 2 being unique: (52!-1)/52!
The odds of shuffle 3 being unique: (52!-2)/52!
The odds of shuffle 4 being unique: (52!-3)/52!
To get the odds that all four of these shuffles are unique, we need to multiply. So, the probability of all four shuffles being unique is: (52!-1)(52!-2)(52!-3)/52!^3.
Taking it to an arbitrary n number of shuffles, we could calculate the odds like this: (52!-1)(52!-2)...(52!-(n-1))/52!^(n-1)
What we want to know is, at what n do the odds of all shuffles being unique drop below .5?
You can see that calculating numbers this big becomes totally unworkable very quickly. What we need is a way to come up with a good estimate. That's where the Birthday Attack comes in.
The Birthday Attack is a cryptographic attack that tries to find collisions in hash functions. For a hash function with H possible outputs, how many random inputs would we have to hash before it's more likely than not that two of them produce the same output? The Birthday Attack has an answer: roughly 1.25*sqrt(H) (more precisely, sqrt(2*ln(2)*H), about 1.1774*sqrt(H)).
If we take our card-shuffling procedure as the "hash function", which is random by nature, then H is the total number of possible orderings of a 52-card deck, which we already know to be 52!. So, we can estimate the point at which a duplicate shuffle becomes more likely than not as 1.25*sqrt(52!), or about 1.12e34. Still a huge number, but about 33 orders of magnitude smaller than the commonly stated 4e67.
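As a quick numerical sanity check, here is a short Python sketch (my own illustration, not from the posts being discussed) that reproduces both the 23-person birthday answer and the 1.12e34 shuffle estimate:

import math

def collision_threshold(outcomes):
    # Smallest n such that P(some two of n uniform draws collide) > 1/2,
    # using the same running product of unique-draw odds as above.
    p_unique, n = 1.0, 0
    while p_unique > 0.5:
        n += 1
        p_unique *= (outcomes - (n - 1)) / outcomes
    return n

print(collision_threshold(365))        # 23, the birthday paradox

orderings = math.factorial(52)         # 52! is about 8.07e67
print(1.25 * math.isqrt(orderings))    # about 1.12e34, the birthday-attack estimate

The exact loop is only feasible for small spaces like 365 days; for 52! it would never finish, which is exactly why the square-root estimate is needed.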
If you read this far, thank you for entertaining my random mathematical musings! Below are the links to the wiki entries for the Birthday Problem and the Birthday Attack, which I find incredibly interesting.
https://en.wikipedia.org/wiki/Birthday_problem
https://en.wikipedia.org/wiki/Birthday_attack
submitted by murpleturkey to theydidthemath [link] [comments]


2024.04.28 21:03 LeanyGamerGal Pearson's correlation coefficient in hypothesis testing

Hello, I have a few clarifications when doing this. We have a research project due in two days and have had zero lessons on what we have to do to complete a quantitative study.
  1. When following the seven steps, we refer to the table of critical values of r for the critical values, right?
  2. When we compare the calculated correlation coefficient with the critical value, what does this mean? Is it the value we get from solving Pearson's r? The one where you need the summation of x, y, xy, x², and y²?
  3. If so, where does the t-value come into place here? The one where you need the standard deviation and mean of both variables. I've seen some solve it for a problem needing Pearson. Does it even have something to do with it? Because I think the t-value is only for the t-test? (See the sketch right after this list.)
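On question 3: the t-value and the critical-r table are two routes to the same test, because t = r*sqrt(n-2)/sqrt(1-r^2) follows a t-distribution with n-2 degrees of freedom. A minimal sketch with made-up data (my own illustration, not from any course materials):

import math

def pearson_r(x, y):
    # Computational formula using the sums of x, y, xy, x^2, and y^2
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(a * b for a, b in zip(x, y))
    sxx = sum(a * a for a in x)
    syy = sum(b * b for b in y)
    return (n * sxy - sx * sy) / math.sqrt((n * sxx - sx**2) * (n * syy - sy**2))

x = [1, 2, 3, 4, 5]
y = [2, 1, 4, 3, 5]
r = pearson_r(x, y)                              # 0.8
n = len(x)
t = r * math.sqrt(n - 2) / math.sqrt(1 - r**2)   # about 2.31
# Compare r to the critical-r table, or t to the critical t with n-2 df;
# both comparisons lead to the same accept/reject decision.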
I'm very sorry if I sound too dumb right now, but we have been taught nothing so far. Everything I know right now comes from me trying to connect all these tiny bits of information from the little learning material shared with us.
submitted by LeanyGamerGal to askmath [link] [comments]


2024.04.28 21:01 Ok_Session_8305 You have been looking for the best and yes we have found a great solution for you

Extensive Channel Selection:

Eight88tv boasts an extensive array of channels catering to the diverse tastes and preferences of US and Canadian viewers. From live sports events, popular TV series, and news updates to on-demand movies and documentaries, Eight88tv gives viewers access to a plethora of content choices, so there's something for everyone.

Unrivaled Streaming Quality:

When it comes to streaming, quality matters, and Eight88tv doesn't disappoint. With state-of-the-art streaming technology and robust servers, viewers can enjoy smooth and buffer-free streaming, even during peak hours. Whether you're tuning in to catch the big game or binge-watching your favorite series, Eight88tv delivers an immersive viewing experience with pristine picture quality.

Cost-Effective Solution:

In an era of rising cable bills and subscription fatigue, Eight88tv offers a breath of fresh air with its cost-effective pricing model. By providing premium entertainment at a fraction of the cost of traditional cable subscriptions, Eight88tv allows US and Canadian citizens to enjoy high-quality content without breaking the bank, making it an attractive option for budget-conscious viewers.

User-Friendly Interface:

Navigating through a maze of channels and content can be daunting, but Eight88tv simplifies the process with its user-friendly interface. With intuitive navigation and easy-to-use features, viewers can quickly find and access their favorite content with just a few clicks, enhancing the overall viewing experience.

Discreet Access via Telegram:

One of the key advantages of Eight88tv is its discreet access via Telegram. With a dedicated Telegram channel, users can easily join and access Eight88tv's services without drawing unnecessary attention, making it an ideal choice for those who prefer a low-key approach to IPTV.
submitted by Ok_Session_8305 to secondthoughts8 [link] [comments]


2024.04.28 20:30 Correct-Profession84 Question about negative exponents

Why is 5^(-1) equal to 1/5?
I know how to calculate with negative exponents, but what is the reasoning behind this law:
x^(-a) = 1/x^a
I know that the 1 is there because 5^0 = 1, so 5^(-1) = 1/5.
Also, 3*1 = 1+1+1.
But 0.9*0.9 = ??
How can I display a multiplication of numbers below one as an addition, or just how can I picture it?
I know it's 0.81 because I just calculate 9*9 and put the decimal point in the right place.
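A worked version of the index-law argument (my notation, added for clarity):

5^{1} \cdot 5^{-1} = 5^{1+(-1)} = 5^{0} = 1 \quad\Longrightarrow\quad 5^{-1} = \frac{1}{5}

And for the decimal question: 0.9 * 0.9 means nine copies of one tenth of 0.9, i.e. nine copies of 0.09, so it can still be displayed as repeated addition of a smaller piece:

0.9 \times 0.9 = 9 \times 0.09 = \underbrace{0.09 + \cdots + 0.09}_{9 \text{ times}} = 0.81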
submitted by Correct-Profession84 to askmath [link] [comments]


2024.04.28 20:21 Curious_Category7429 Sample size ANOVA

I have a little confusion in choosing a test.
  1. There are 3 groups: normal people, people with non-blue light, people with blue light.
My alternative hypothesis is that there is a significant difference between these 3 groups.
So I decided to choose ANOVA post hoc in G*Power to calculate the sample size,
because I thought it's a two-tailed test.
And I know the procedure to do an ANOVA post hoc.
  2. There are 3 groups: (1) aged with no disease, (2) early AMD, (3) AMD.
My alternative hypothesis is that at least one group differs significantly from the overall mean of the dependent variable. (The researcher is trying to prove AMD is increasing relative to the other groups, so I thought this hypothesis is best.)
I decided to do an a priori analysis in G*Power. I thought it's one-tailed.
Am I correct with the hypothesis and test?
If it's wrong, someone please correct it.
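A cross-check on the G*Power numbers is possible in Python with statsmodels; the effect size f = 0.25 (a conventional "medium" effect) is an assumption here, not something from the study:

from statsmodels.stats.power import FTestAnovaPower

# A priori sample size for a one-way ANOVA with 3 groups
analysis = FTestAnovaPower()
n_total = analysis.solve_power(effect_size=0.25, alpha=0.05,
                               power=0.80, k_groups=3, nobs=None)
print(n_total)  # total N across all 3 groups, about 158

Note that the omnibus ANOVA F-test has no one-tailed/two-tailed choice in the usual sense: the F statistic is already one-sided, and direction-specific claims (e.g. "AMD is higher") are handled by planned contrasts or post hoc comparisons.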
submitted by Curious_Category7429 to AskStatistics [link] [comments]


2024.04.28 20:07 jailbreak627 Passed AT/AT/AT - Exam Tips, Tricks, and Resources

Hey Everyone,
Passed the exam with AT/AT/AT earlier this week. I got a lot of useful information out of this sub, so hopefully some of these tools, tips, and tricks can help someone going down the PMP path. :)
Taking The Exam
So as you know, the test is 4 hours long with 180 questions. For pure test-taking suggestions, I would recommend the below.
Test Itself
Test Observations
Exam Question Tips
Materials Used
Andrew Ramdayal 35 Hr. PMP Exam Prep Course (Udemy)
PMI StudyHall
PM Aspirant Process Group Game
PMP Exam-PMI New Format 2024 Mock Simulator (PMBOK7 Updated)
150 PMBOK 7 Scenario Based PMP Exam Questions and Answer
The Complete Project Management Body of Knowledge in One Video (PMBOK 7th Edition)
Materials Aware Of But Didn't Use
Third3Rock Notes
200 AGILE PMP Questions and Answers - the BEST Preparation for the Exam!
100 WATERFALL PMP Questions and Answers - EXCELLENT Preparation for the Exam!
Conclusion
Hopefully some of these tips/resources are able to help you out. Keep grinding and putting in those long hours. It will be worth it. Good luck!
submitted by jailbreak627 to pmp [link] [comments]


2024.04.28 19:39 Striking_Friendship4 Professional line follower

So, as the title suggests, I want to build a 'professional' line follower robot, or one good enough to compete in international competitions. I already have basic/intermediate knowledge of electronics and robots of this type. I have participated in line follower competitions and even reached the podium. However, I believe the way I've been coding these robots' software is what's been holding them back from being truly great. I usually use a relatively simple PID algorithm that calculates how far the robot is from the center of the line and uses this value to compute a differential that adjusts each wheel's speed.
My question is: How can I improve this? What concepts do I need to learn to program a line follower robot in a truly professional way?
Mostly, I use an Arduino Nano, a DRV8833 motor driver, a sensor array with 8 or 16 sensors, and all the basic components you would expect in a line follower.
Example of the PID code I'm using right now (variable names are in Portuguese):
This first function gets the position of the line
int leitura(void) {
  // Read the 7-sensor array, threshold each reading, and return the
  // line position on a 0-600 scale (or -1 if no sensor sees the line).
  for (int i = 6; i >= 0; i--) {
    sensores[i] = analogRead(A0 + i);
    // 'linha' selects the polarity: 0 = dark line, 1 = light line
    if (linha == 0) { digital[i] = (sensores[i] <= limiar[i]) ? 0 : 1; }
    if (linha == 1) { digital[i] = (sensores[i] <= limiar[i]) ? 1 : 0; }
    Serial.print(digital[i]);
    Serial.print("\t");
  }
  // Weighted average of the active sensors gives the line position
  somap = (600 * digital[0]) + (500 * digital[1]) + (400 * digital[2]) + (300 * digital[3]) + (200 * digital[4]) + (100 * digital[5]) + (0 * digital[6]);
  soma = digital[0] + digital[1] + digital[2] + digital[3] + digital[4] + digital[5] + digital[6];
  pos = (soma == 0) ? -1 : (somap / soma);  // guard against dividing by zero
  // If the line was lost, assume it ran off the side it was last seen on
  if (lastPos <= 100 && pos == -1) {
    pos = 0;
  }
  if (lastPos >= 500 && pos == -1) {
    pos = 600;
  }
  lastPos = pos;
  return pos;
}
The PID function adjusts the motor speeds:
void PID() {
  proporcional = pos - setpoint;            // P: how far we are from the line center
  derivativo = proporcional - last_prop;    // D: change in error since the last reading
  integral = erro1 + erro2 + erro3 + erro4 + erro5 + erro6;  // I: sum of the last 6 errors
  last_prop = proporcional;
  // Shift the error history one step back
  erro6 = erro5;
  erro5 = erro4;
  erro4 = erro3;
  erro3 = erro2;
  erro2 = erro1;
  erro1 = proporcional;
  int diferencial = (proporcional * KP) + (derivativo * KD) + (integral * KI);
  // Clamp the correction to the base speed
  if (diferencial > vel) diferencial = vel;
  else if (diferencial < -vel) diferencial = -vel;
  // Slow one motor or the other depending on the sign of the error
  // (swap the branches if the robot steers away from the line)
  if (diferencial < 0) motores(vel + diferencial, vel);
  else motores(vel, vel - diferencial);
}
I would really appreciate it if someone helped me with this. Articles, YouTube videos, etc. are very welcome.
submitted by Striking_Friendship4 to arduino [link] [comments]


2024.04.28 16:06 Fickle-Age7082 [P] Question related to Building and Tuning Support Vector Regression model

I am building a Support Vector Regression model to predict the run time of a bus based on features such as GPS longitude, latitude, and the segment the bus is on.
I have 2 questions:
  1. I am trying to tune the hyperparameters in order to achieve the best results by using GridSearchCV to search for the best params in a param grid. Am I following the right approach, and are there any better ways I can implement my model to get the best hyperparameters?
  2. I try to run the code in both Kaggle and Google Colab, but it runs for hours and finally cannot execute and times out. Is this because my dataset is too large? I have 130,464 records in my dataset. Below is the code for my model. I'd really appreciate it if you could take a look.

from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.metrics import mean_squared_error

# Separate features and target variable
X = df[['segment_latitude', 'segment_longitude', 'segment']]
y = df['segment_run_time']

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create a pipeline with scaling and SVR
pipeline = Pipeline([
    ('scaler', StandardScaler()),
    ('svr', SVR(kernel='rbf', gamma='scale'))
])

# Define the parameter grid
param_grid = {
    'svr__C': [0.1, 1, 10],          # Different values of C
    'svr__epsilon': [0.1, 0.2, 0.5]  # Different values of epsilon
}

# Perform grid search
grid_search = GridSearchCV(pipeline, param_grid, cv=5, scoring='neg_mean_squared_error')
grid_search.fit(X_train, y_train)

# Get the best parameters
best_params = grid_search.best_params_
print("Best Parameters:", best_params)

# Predict on testing set using the best estimator
best_estimator = grid_search.best_estimator_
y_pred = best_estimator.predict(X_test)

# Evaluate the model
mse = mean_squared_error(y_test, y_pred)
print("Mean Squared Error:", mse)

# Calculate RMSE
rmse = mean_squared_error(y_test, y_pred, squared=False)
print("Root Mean Squared Error:", rmse)
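On question 2: kernel SVR training scales somewhere between quadratically and cubically with the number of rows, and the grid above fits 3 x 3 parameter combinations x 5 folds = 45 models on roughly 104k rows each, which plausibly explains the timeouts. A common workaround (my suggestion, continuing from the variables defined in the snippet above) is to tune on a random subsample first and refit the winner on the full training set:

# Tune on a 10k-row subsample, then refit the best pipeline on all rows
X_sub = X_train.sample(n=10_000, random_state=42)
y_sub = y_train.loc[X_sub.index]
grid_search.fit(X_sub, y_sub)
best_estimator = grid_search.best_estimator_.fit(X_train, y_train)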
submitted by Fickle-Age7082 to MachineLearning [link] [comments]


2024.04.28 15:16 Iprosmartv The Pros and Cons of 1-Month Vs. Annual IPTV Commitments

When browsing IPTV subscription options, buyers face a choice between month-to-month flexibility versus longer-term discounts. Premium IPTV Services (premiumiptvservices.com) evaluates the main pros and cons for 1-month versus 12-month commitments.

1-Month Pros

However, monthlies also carry certain drawbacks.

1-Month Cons

12-Month Pros

However, lengthier contracts also involve some disadvantages.

12-Month Cons

In truth, neither monthly nor annual subscriptions unequivocally dominate in all scenarios. Individual circumstances like household needs, financial factors, risk-tolerance and technical expertise influence the optimal term.
Premium IPTV Services recommends weighing both short and long-term benefits with one's own unique requirements to decide whether monthly flexibility or annual discounts suit best. Buyers choose the strategy maximizing value and satisfaction over their anticipated usage period.
Overall, annual commitments offer maximum discounts and stability suiting stable needs while monthlies preserve flexibility for variable or less-committed situations. No single answer prevails for every individual cord-cutter.
submitted by Iprosmartv to PremiumIPTVServices1 [link] [comments]

