I was able to get speeds of 71 MPH, but can anyone tell me exactly what I did? My approach was just to randomly change values, train, and run.
I read the documentation, watched the first video, and read all of the lecture slides, but I haven't really grasped any of the concepts. Like, is 400 num_neurons a lot? Not a lot? Why would looking backward help at all? Why have that as a variable? It suggests increasing train_iterations from 10,000. Does that mean 15k, 100k, 1000k? Was the point of the exercise just to get comfortable with making changes and running simulations, or should I have come away with a deeper understanding?
If anyone feels like they could teach me, I'd be willing to pay $100 for a half hour of your time, over Skype or in person if you are local to Cambridge/Boston.
I haven't played around with it much, but look at the layers of the network. The first layer is the inputs, which is pretty much one for each input square. If you want to be fairly optimal, figure you need to see 5 lanes total and at least 10 squares ahead (and maybe a few behind, in case you need to get out of a jam), so that's 50 inputs per frame. But the network takes inputs from 4 frames, so that's 200+ inputs in the first layer (plus 15 for the actions from the last 3 frames).
There are 5 output actions, so that is the size of the output layer.
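To make that arithmetic concrete, here's a small sketch of the input-size calculation. The variable names (`lanesSide`, `patchesAhead`, `patchesBehind`, `temporalWindow`) are modeled on DeepTraffic-style starter code and are assumptions on my part; check your own script for the exact identifiers:

```javascript
// Hypothetical parameter names; verify against your own starter code.
const lanesSide = 2;       // lanes visible on each side -> 5 lanes total
const patchesAhead = 10;   // grid squares visible ahead
const patchesBehind = 0;   // grid squares visible behind
const temporalWindow = 3;  // how many past frames are fed back in
const numActions = 5;      // the 5 output actions

// Inputs per frame: one per visible grid square.
const numInputs = (lanesSide * 2 + 1) * (patchesAhead + patchesBehind);

// Total first-layer size: current frame plus the temporal window of
// past frames, plus the actions taken in each of those past frames.
const networkSize =
  numInputs * (temporalWindow + 1) + numActions * temporalWindow;

console.log(numInputs);   // 50 squares per frame
console.log(networkSize); // 215 = 200 grid inputs + 15 action inputs
```

That reproduces the count above: 50 squares per frame times 4 frames is 200, plus 5 actions for each of the last 3 frames is 215.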
You can change the number of nodes in the middle layer, but you can also cut-and-paste that code to add additional layers.
I think you probably need at least two layers: the first to determine which lanes are clear, and the second to figure out how to maneuver into the desired lane.
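A two-hidden-layer version of that idea might look like the following, sketched as ConvNetJS-style layer definitions. The `layer_defs`/`num_neurons` shape follows the starter code's conventions, but the specific layer sizes here are just illustrative guesses, not tuned values:

```javascript
// ConvNetJS-style layer definitions; sizes are illustrative guesses.
const numInputs = 215;  // e.g. the 200+ grid inputs plus 15 action inputs
const numActions = 5;

const layerDefs = [];
layerDefs.push({ type: 'input', out_sx: 1, out_sy: 1, out_depth: numInputs });
// First hidden layer: could learn which lanes are clear.
layerDefs.push({ type: 'fc', num_neurons: 40, activation: 'relu' });
// Second hidden layer (a cut-and-paste of the line above):
// could learn how to maneuver into the desired lane.
layerDefs.push({ type: 'fc', num_neurons: 20, activation: 'relu' });
// Output layer: one value per action.
layerDefs.push({ type: 'regression', num_neurons: numActions });

console.log(layerDefs.length); // 4 layer definitions
```

The point is just that adding a layer is one more `push` of an `fc` entry; the network object itself is built elsewhere from this array.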
With such a large network, training will take longer, so stick with the default iterations until you feel you need more fine-tuning.
On second thought, I would say start with smaller models and work your way up. More complicated models take much longer to train; keep training them until they plateau. Turn off the overlay and train at fast speed to get there faster.
Once you are satisfied with a model, change one thing at a time and see how much improvement you get, but it is a time-consuming process. I have not fiddled with any of the "opt." options yet.
400 num_neurons is quite a lot. I trained a facial expression classifier in college a few years ago from images I got online. That was for a basic feed-forward classifier, though; this is different.
The input layer was large but the hidden layer was only 12 neurons.
I'm still learning myself, so I wouldn't be able to teach you. There are some great courses online.
400 units is very small. Today's image classification neural nets have hundreds of thousands of units ("units" is my preferred non-biological synonym for "neurons").