.. (to the containing folder)
In part one, what should be the total size of the training data after using data augmentation?
number_of_batches * mini_batch_size images for every call to create_batches.
In part one, the HW assignment said the Worker's init arguments are jobs, result, and anything else we decide to add, but in the provided file, preprocessor.py, there are two more arguments. What should we do about them?
You can either remove training_data and batch_size, or keep them if you use them - your choice.
In part 3, the link to further explanations doesn't work for me. Can you send it to me?
http://setosa.io/ev/image-kernels/ |
I implemented part one of the HW without overriding the function fit. Is that alright?
No. The workers should not be created within create_batches, but before calling the super class's fit function - which requires overriding fit.
In part 1, what are the ranges for angle, tilt, dx, dy and steps?
Your functions should support the following ranges: angle is between -45 and 45 (including the edges), tilt is between -1 and 1 (including the edges), dx and dy are between -28 and 28 (not including the edges), and steps is at least 2. However, this does not mean you have to use these full ranges in your augmentation - try to find the values that best improve accuracy.
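As an illustration only (the sampling approach and the upper bound for steps below are our assumptions, not part of the assignment), drawing parameters inside the supported ranges could look like:

```python
import random

# Sketch: draw augmentation parameters inside the supported ranges.
angle = random.uniform(-45, 45)    # edges included
tilt = random.uniform(-1, 1)       # edges included
dx = random.randint(-27, 27)       # -28 and 28 themselves are excluded
dy = random.randint(-27, 27)
steps = random.randint(2, 8)       # must be at least 2; 8 is an arbitrary cap
```

Narrower ranges than the maximum often work better for accuracy; tune them empirically.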
In part 1, x + y*tilt is a float. What should I do?
Round it (using Python's built-in round function).
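To show where the rounding comes in, here is a minimal sketch of a skew-style index mapping. The exact skew convention (sign of tilt, rows vs. columns) is defined by the assignment, so treat this only as an illustration of rounding the float index:

```python
import numpy as np

def skew_sketch(image, tilt):
    # image: flat array of 784 values, viewed as 28x28
    img = image.reshape(28, 28)
    out = np.zeros_like(img)
    for y in range(28):
        for x in range(28):
            src = round(x + y * tilt)   # float index -> nearest int
            if 0 <= src < 28:           # discard out-of-bounds sources
                out[y, x] = img[y, src]
    return out.reshape(784)
```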
In part 3, I compared my implementation's output to the numba function and there is a slight difference (under 1e-9) between all values. Is that alright?
Yes. Floating-point comparison is never exact; use math.isclose with rel_tol=1e-9 to compare the values instead.
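For example (standard Python floating-point behavior, not specific to this assignment):

```python
import math

a = 0.1 + 0.2   # accumulates floating-point rounding error
print(a == 0.3)                            # False: exact comparison fails
print(math.isclose(a, 0.3, rel_tol=1e-9))  # True: tolerant comparison passes
```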
multiprocessing.cpu_count() always returns 64. How do I get the number of CPUs available to the program?
You may use the following code to get the correct number of CPUs in use when running your program on the course's server:

```python
import os
num_cpus = int(os.environ['SLURM_CPUS_PER_TASK'])
```
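If you also want the same code to run outside the course's server, a fallback (our suggestion, not a course requirement) avoids a KeyError when the SLURM variable is absent:

```python
import os

# Prefer the SLURM allocation when it exists; otherwise fall back to
# os.cpu_count() (which may return None, hence the final "or 1").
num_cpus = int(os.environ.get('SLURM_CPUS_PER_TASK', os.cpu_count() or 1))
print(num_cpus)
```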
In part 1, what is the shape of the input and output image in the functions shift, skew, step_func and rotate?
The input and output should both be a numpy array of size 784.
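Since the functions take and return flat arrays, a common pattern (a sketch only; the transform body is omitted) is to reshape to 28x28, operate in 2D, and flatten back:

```python
import numpy as np

def transform_sketch(image):
    # image: flat numpy array of 784 values (a 28x28 image)
    img2d = image.reshape(28, 28)   # work on the 2D view
    # ... apply shift/skew/rotate logic here ...
    return img2d.reshape(784)       # hand back a flat 784 array
```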
In Part 3, how are you intending to invoke convolve2d?
`scipy.signal.convolve2d(image, kernel, 'same')` - notice the order of the parameters: our functions get the kernel first and then the image.
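If you want a reference to sanity-check against without scipy, here is a minimal numpy-only sketch of zero-padded 'same' convolution for odd-sized kernels. The name `convolve2d_naive` is ours; like the assignment's functions, it takes the kernel first:

```python
import numpy as np

def convolve2d_naive(kernel, image):
    # Zero-padded 'same' convolution for odd-sized kernels.
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image.astype(float), ((ph, ph), (pw, pw)))
    flipped = kernel[::-1, ::-1]    # true convolution flips the kernel
    out = np.zeros(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
    return out
```

For odd kernels this matches `scipy.signal.convolve2d(image, kernel, 'same')` with the default zero-fill boundary; it is loop-based and slow, so use it only for testing.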
In Part 2, which methods of Queue do we need to implement?
All of the methods required from Part 1's Queue |
I'm getting the following error on the servers:
The script in the HW has been updated; please run it again.
In Part 3, are we allowed to use numpy functions?
You are allowed to use numpy's types and allocation functions; the rest should be implemented by yourself.
In Part 1, does each worker create a single image at a time, or a batch?
Each worker creates a batch at a time. |
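As a rough sketch of the loop each worker runs (the names `create_one_batch` and the exact Worker API here are placeholders - the real interface is defined by the assignment):

```python
import multiprocessing as mp

def create_one_batch(batch_size):
    # Placeholder for the real augmentation work: produce one whole batch.
    return list(range(batch_size))

class Worker(mp.Process):
    def __init__(self, jobs, result):
        super().__init__()
        self.jobs = jobs
        self.result = result

    def run(self):
        while True:
            job = self.jobs.get()
            if job is None:          # poison pill: no more batches to make
                break
            self.result.put(create_one_batch(job))   # one batch per job
```

The key point is that each item taken from the jobs queue yields one entire batch on the result queue, not one image.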
How do I kill a job?
In a different terminal (if the current terminal is stuck):
1) Run `ps -ef | grep <username> | grep srun` to get the pid of the running job (the pid of the srun request process).
2) Kill it using `kill -9 <pid>`.
When requesting 32 cores, I got the error: Unable to allocate resources: Requested node configuration is not available
Request 2 GPUs instead of one.