Trying to Predict the Price of Bitcoin

John Cook
4 min read · Dec 19, 2020


What is the best way to predict the future? I would argue that a subset of machine learning called “Time Series Forecasting” is. That is because it takes the past into account, and possibly its own output, to predict new data. The way it uses the past is pretty straightforward: it picks up on trends in the historical data and makes predictions based on those trends, and then it can make further predictions based on the prior predictions too. Tie all that up with the magic that is machine learning and you get a very powerful tool.
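To make the “predictions feed back into predictions” idea concrete, here is a toy sketch of recursive forecasting. The function name and 24-step window are just for illustration, and `model` stands in for any trained forecaster that maps a window of past values to the next one:

```python
import numpy as np

def recursive_forecast(model, history, steps, window=24):
    """Predict `steps` future values, feeding each prediction back in."""
    buffer = list(history)
    predictions = []
    for _ in range(steps):
        # Use the most recent `window` observations as the model input.
        x = np.array(buffer[-window:], dtype=np.float32).reshape(1, window, 1)
        next_value = float(model.predict(x, verbose=0)[0, 0])
        predictions.append(next_value)
        buffer.append(next_value)  # the prediction becomes part of the history
    return predictions
```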

I chose to predict the price of Bitcoin, and was met with limited success. I took a minimalist approach, since that is often the best method, but I think it may have crippled my results in the end, so I am not sure it was the best idea this time. By minimalist approach I mean that I tried to use as little data as possible to get the desired results. To that effect I used only the close price (and eventually the open price, but that data was lost) to make my predictions. One constraint of this project was to use only 24 hours of data, so I decided to use one data point per hour: the close price within that hour. This method turned out to be not so good, but I must move on to other projects at this time. I chose the pandas library for my data preprocessing because I am familiar with it and did not want to get rusty; I could just as easily have used numpy but chose not to this time.
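A rough sketch of that preprocessing step, assuming a raw Bitcoin price CSV; the file and column names here are hypothetical and the real dataset may be laid out differently:

```python
import pandas as pd

# Hypothetical file and column names; the real dataset may differ.
raw = pd.read_csv("coinbase_btc_usd.csv")

# Keep only the timestamp and the close price, dropping missing rows.
close = raw[["Timestamp", "Close"]].dropna()

# Convert the Unix timestamp to a datetime index and keep one close per hour.
close["Timestamp"] = pd.to_datetime(close["Timestamp"], unit="s")
hourly = close.set_index("Timestamp").resample("1H").last().dropna()

# Save the cleaned frame so there is something to inspect for errors or NaNs.
hourly.to_csv("btc_hourly_close.csv")
```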

Another constraint of the project was to use TensorFlow datasets for our data. I was very successful here and was able to produce and use TF datasets. I did this by first saving my preprocessed pandas DataFrame to a CSV file; I did not strictly need the intermediate file, but saving it gave me something to look at if there were errors or nasty NaNs. If I had more time I would eventually have loaded everything from memory instead of disk. Next I used numpy to load the CSVs into numpy arrays. I then reshaped the data into three dimensions by adding an extra axis to it; this does not change the values of the data, only how it is presented. Next I made my training and validation values into a tf dataset using “tf.data.Dataset.from_tensor_slices”, and I did the same with the targets (the expected prediction of the data, one hour in advance) for both the training and validation sets. However, the values were 24 hours worth of data and the targets only one, so I had to use “tf.data.Dataset.zip” on them to combine them into a dataset of (values, target) tuples for the model.
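A minimal sketch of that pipeline, picking up the hourly CSV from the previous step; the file name, 80/20 split, and batch size here are my illustrative choices, not necessarily the exact values used:

```python
import numpy as np
import tensorflow as tf

WINDOW = 24  # hours of close prices per training sample

# Load the preprocessed close prices back in from disk (column 1 of the CSV).
prices = np.loadtxt("btc_hourly_close.csv", delimiter=",", skiprows=1, usecols=1)

# Sliding windows of 24 hourly closes, each paired with the close 1 hour ahead.
X = np.array([prices[i:i + WINDOW] for i in range(len(prices) - WINDOW)])
y = prices[WINDOW:]

# Add an extra axis so each sample is (24 timesteps, 1 feature); the values
# do not change, only how they are presented to the model.
X = X[..., np.newaxis]

# Chronological split into training and validation sets.
split = int(len(X) * 0.8)
X_train, X_val = X[:split], X[split:]
y_train, y_val = y[:split], y[split:]

# Wrap values and targets separately, then zip them into (values, target) pairs.
train_ds = tf.data.Dataset.zip((
    tf.data.Dataset.from_tensor_slices(X_train),
    tf.data.Dataset.from_tensor_slices(y_train),
)).batch(32)
val_ds = tf.data.Dataset.zip((
    tf.data.Dataset.from_tensor_slices(X_val),
    tf.data.Dataset.from_tensor_slices(y_val),
)).batch(32)
```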

I used these datasets with an LSTM, the long short-term memory layer in Keras, and had very poor results because I could not get my model to learn. (I eventually did get it to learn by scaling the values, but could not get it to unscale properly, and I unfortunately lost two days of work to a blue screen of death, so I abandoned that approach.) An LSTM has a longer short-term memory than a standard RNN (recurrent neural network). It achieves this by carrying a cell state from step to step and using gates to decide which trends are important enough to keep and which should be forgotten. My problem was that the model decided the start of the dataset (or roughly thereabouts) was the only important thing, and so it would return only one value no matter the input. As I said, I tried to fix this by scaling my data, and had some success, but unfortunately I lost that work, and it did not work entirely correctly anyway.
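The model itself was tiny; it looked something along these lines, though the layer size, optimizer, and epoch count here are illustrative rather than my exact settings:

```python
import tensorflow as tf

# A minimal single-layer LSTM regressor for the 24-hour windows above.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(24, 1)),  # 24 hourly close prices, 1 feature
    tf.keras.layers.LSTM(32),              # the long short-term memory layer
    tf.keras.layers.Dense(1),              # predicted close price 1 hour ahead
])

model.compile(optimizer="adam", loss="mse")

# train_ds and val_ds are the zipped datasets from the previous sketch.
history = model.fit(train_ds, validation_data=val_ds, epochs=20)
```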

Below are my best results:

Red is the data and blue is my prediction; note that it only valued the first data point (my training data).

And here is the loss of my model (note that this loss is exaggerated, since I trained for far too long in order to scout out the general shape of the graph):

Red is the validation dataset and blue is the training dataset

Depending on which dataset was used for the loss and how the model was trained I would get something like the picture above, but I could not do much better than that.

If I had more time I would figure out how to scale my data properly, and then use both the open and close prices every 15 minutes so I had four times as much data. I would then use a bidirectional LSTM (I experimented with one, but it did not learn either and only doubled the training time). I know it can be done, because I saw nearly perfect results with my scaled data, but I was unable to properly unscale it back into a dollar price after it went through the model. If I could go back and do it all over again, these are the changes I would make.
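For what it’s worth, the scaling fix I have in mind would look roughly like this. I am using scikit-learn’s MinMaxScaler here as an assumption (any equivalent scaler would do), and the sketch builds on the `prices`, `split`, `model`, and `X_val` names from the earlier snippets rather than being the code I lost:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Fit the scaler on the training portion of the price series only, then
# apply the same transform to the whole series before building windows.
scaler = MinMaxScaler()
scaler.fit(prices[:split].reshape(-1, 1))
scaled_prices = scaler.transform(prices.reshape(-1, 1)).flatten()

# ... rebuild the 24-hour windows from `scaled_prices` and retrain as before ...

# The step I never got right: predictions come out in scaled units and
# have to be mapped back to dollars with the *same* scaler.
scaled_preds = model.predict(X_val)                    # X_val built from scaled_prices
dollar_preds = scaler.inverse_transform(scaled_preds)  # back to USD
```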

GitHub: https://github.com/JohnCook17/holbertonschool-machine_learning/tree/master/supervised_learning/0x0E-time_series
