Tuesday, December 8, 2020

Predicting Bitcoin Prices Using ML.Net and Time Series Techniques

Obligatory Machine Learning Stock Predictor 

It seems that anyone who starts learning machine learning and big data analysis eventually thinks they can predict patterns in something as volatile as the stock market. Once you go down the rabbit hole, it is easy to believe that things are not random, we just need enough data, so the attraction of trying to predict stock prices is easy to see.

Unfortunately, there are many external factors that data cannot predict. That became very clear when the 2020 pandemic hit and stock prices tanked. Even so, trying is fun and helps us understand the concepts.

I am not a stock guru, and everything in this post is purely for learning ML.Net and how to use its time series functionality. Please use the code at your own risk for anything beyond learning how to create a time series model.

What is a Time Series Model?

A time series model in machine learning tries to predict values over several future periods. This is different from regression, which predicts a single next value from a series. Some of the more common examples are housing prices, gas prices, and sales predictions. In big data analytics we can use linear regression to plot thousands of points and fit a line through them that represents our answer. That works well when you have two values; for example, with price and date you can create a nice graph representing sales over time.


Data prep

Before getting into the coding, data prep is key here. We are predicting values over time, so you need to decide on a time frame. With sales data, for example, you usually compare year over year, that is, last year's sales versus this year's sales, and you might use a time series model to predict the next 3 years of sales. So you break your data into consistent chunks: yearly, daily, or, as in this post, by the minute.
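If your raw data has gaps or duplicate timestamps, it can also help to resample it into uniform periods before training. Below is a minimal sketch of one way to do that with LINQ. It is not part of the project code, and the Quote class is just a hypothetical stand-in for a raw csv row:


using System.Collections.Generic;
using System.Linq;

//hypothetical stand-in for one raw row: a Unix timestamp in seconds and a price
public class Quote
{
	public long TimeStamp {get;set;}
	public float Price {get;set;}
}

public static class Resample
{
	//keeps the last quote seen in each minute so every 1 minute period has exactly one price
	public static List<Quote> ToMinutes(IEnumerable<Quote> raw)
	{
		return raw
			.GroupBy(q => q.TimeStamp / 60) //Unix seconds -> minute bucket
			.OrderBy(g => g.Key)
			.Select(g => g.OrderBy(q => q.TimeStamp).Last())
			.ToList();
	}
}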



Data


With our Bitcoin example we will be breaking our data into daily changes in price at 1 minute increments. We will be using the btc.csv you can find here on my GitHub page https://github.com/fiveminutecoder/blogs/blob/master/%20mlnet_BTCTimeSeries/btc.csv. *Update: it was found that this file was not consistent in its timestamps, so the code has been updated to use the Bitcoin Historical Data set from Kaggle. It is too large to add to the GitHub site; the dataset was uploaded by Zielak.

Our dataset has multiple columns, but when forecasting we are tracking a value over time, so only price will be used.

Creating the project

If you have not read my previous blog "Getting Started with ML.NET", which can be found here https://fiveminutecoder.blogspot.com/2020/07/getting-started-with-mlnet.html, please do so before continuing.

For this project we will continue to use the Microsoft.ML nuget package, but we will also need to add the Microsoft.ML.TimeSeries nuget package to get the forecasting estimator.
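If you are working from the command line, the project and both packages can be created with the dotnet CLI; the project name below just mirrors the GitHub folder:


dotnet new console --name "mlnet_BTCTimeSeries"
dotnet add package Microsoft.ML
dotnet add package Microsoft.ML.TimeSeries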

Once you have created a project and installed the Nuget packages, we can go ahead and create our data models to be used for our training data and our predictions. 

The first model is the BTCDataModel. It contains our pricing and timestamps we will use for training.


using Microsoft.ML.Data;

namespace mlnet_BTCTimeSeries
{
    public class BTCDataModel
    {
        //Unix timestamp in seconds
        [LoadColumn(0)]
        public int TimeStamp {get;set;}

        [LoadColumn(1)]
        public float Open {get;set;}
        [LoadColumn(2)]
        public float High {get;set;}
        [LoadColumn(3)]
        public float Low {get;set;}
        [LoadColumn(4)]
        public float Close {get;set;}

        [LoadColumn(5)]
        public float Volume {get;set;}
        [LoadColumn(6)]
        public float Currency {get;set;}

        [LoadColumn(7)]
        public float Amount {get;set;}
    }
}

Once we have created our training model, let's create our prediction model. The time series prediction model is slightly different from our supervised learning models. Instead of an array of confidence scores, we get our forecasted prices along with an upper and lower bound for each prediction.


using Microsoft.ML.Data;

namespace mlnet_BTCTimeSeries
{
    public class PredictedSeriesDataModel
    {
        public float[] ForecastedPrice { get; set; }
        public float[] ConfidenceLowerBound { get; set; }
        public float[] ConfidenceUpperBound { get; set; }
    }
}

Now that our models are set up, we can go ahead and add our context and training dataset placeholders. We will set these globally so we can easily access them in our functions.


using System;
using System.Linq;
using System.IO;
using Microsoft.ML;
using Microsoft.ML.Transforms.TimeSeries;
using System.Collections.Generic;

...

static MLContext mlContext = new MLContext();
//last time to show the times of our future predictions
static DateTime lastTime; 
static List<BTCDataModel> trainingData;
static List<BTCDataModel> testingData;
static string fileName = "btcModel.zip";
//how far out we want to predict
static int horizon = 5; 
//holds our in-memory model
static ITransformer forecastTransformer; 

The first step of our project is to create our training and testing data. Our testing data will be the size of our horizon field, since that is the number of periods we are trying to forecast; everything else will be used for training. We will also capture the last timestamp from the training data, which lets us line up the next 5 predictions with the times from our test data.


static void GetTrainingData()
{
	//load our dataset
	IDataView trainingDataFile = mlContext.Data.LoadFromTextFile<BTCDataModel>("bitstampUSD_1-min_data_2012-01-01_to_2020-12-31.csv", hasHeader: true, separatorChar: ',');

	//create enumerable to manipulate data
	List<BTCDataModel> data = mlContext.Data.CreateEnumerable<BTCDataModel>(trainingDataFile, false, true).ToList();

	//times in the data set are not guaranteed to be in order, so sort by timestamp
	data = data.OrderBy( o => o.TimeStamp).ToList();

	//determines the size of our testing data
	int dataSubset = data.Count() - horizon;

	//create our training data up to the dates we are trying to predict
	trainingData = data.GetRange(0, dataSubset).ToList(); 

	// will get the number of items we are trying to predict
	testingData = data.GetRange(dataSubset, horizon);
	
	//We want to capture time of last item in training data so we can increment the time stamp for our output and put a date/time to the forecast
	lastTime = ConvertTimeStamp(trainingData.Last().TimeStamp);
}

//helper for converting timestamp to date time
static DateTime ConvertTimeStamp(double TimeStamp)
{
	var offset = TimeSpan.FromSeconds(TimeStamp);
	DateTime startTime = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
	return startTime.Add(offset).ToLocalTime();
}

With our data sets created, we can move on to creating our estimator. Unlike the supervised learning estimator, there is very little data manipulation. With the time series we just need to map the property names from our models to the appropriate columns; we will use "nameof" to get the property names instead of hard coding each one. There are a lot of settings on the forecasting estimator, but the 3 main ones to worry about are "windowSize", "seriesLength", and "horizon". Window size is the number of data points in each window the algorithm uses to model the series, series length is the number of recent data points the model keeps when forecasting, and horizon is how far out we want to predict.


static void TrainModel()
{
	IDataView trainingDataView = mlContext.Data.LoadFromEnumerable(trainingData);

	// creates our estimator, as you can see we are using the forecasting estimator
	var estimator = mlContext.Forecasting.ForecastBySsa(outputColumnName: nameof(PredictedSeriesDataModel.ForecastedPrice),
					inputColumnName: nameof(BTCDataModel.Amount), //column used for time series prediction
					windowSize: 60, //series is sampled in 60 minute windows or periods, and the past 60 minutes will be used to make the prediction
					seriesLength: 1440, //we want to train over a day's worth of time so this will be the interval, we have 1440 minutes in a day
					trainSize: trainingData.Count(), //how many data points we want to sample
					horizon: horizon,
					confidenceLevel: 0.45f, //sets our margin of error, lower the confidence level the smaller the upper/lower bounds
					confidenceLowerBoundColumn: nameof(PredictedSeriesDataModel.ConfidenceLowerBound),
					confidenceUpperBoundColumn: nameof(PredictedSeriesDataModel.ConfidenceUpperBound)
	);

	//creates our fitted model
	forecastTransformer = estimator.Fit(trainingDataView); 
}

One thing to notice in our trainer is that we are not calling save yet; that will happen later in our predict function. Because time series data needs updating to stay relevant, we save a little differently. With our data now fitted to our estimator, we can predict our future Bitcoin prices.


static void Predict()
{
	//prediction engine based on our fitted model
	TimeSeriesPredictionEngine<BTCDataModel, PredictedSeriesDataModel> forecastEngine = forecastTransformer.CreateTimeSeriesEngine<BTCDataModel, PredictedSeriesDataModel>(mlContext);

	//call to predict the next 5 minutes
	 PredictedSeriesDataModel predictions = forecastEngine.Predict();

	 //write our predictions
	 for(int i = 0; i < predictions.ForecastedPrice.Count(); i++)
	{   
		lastTime = lastTime.AddMinutes(1);
		Console.WriteLine("{0} price: {1}, low: {2}, high: {3}, actual: {4}", lastTime, predictions.ForecastedPrice[i].ToString(), predictions.ConfidenceLowerBound[i].ToString(), predictions.ConfidenceUpperBound[i].ToString(), testingData[i].Amount);
	}


	//instead of saving, we use CheckPoint. This allows us to continue training with updated data and not need to keep such a large data set,
	//so we can append Jan. 2021 data without needing everything before it to train the model, speeding up the process
	forecastEngine.CheckPoint(mlContext, fileName);
}

The predict function is very similar to our other predict functions; we just display the output a bit differently since we want to see all of our values, not just the most likely one. At the end of this function we call "CheckPoint" on our forecastEngine. This creates a saved model that we can keep updating with new data points without retraining the entire model.
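To show how that checkpoint can be picked up later, here is a minimal sketch of an update function. It is not part of the project above and assumes the same mlContext, fileName, and model classes from this post. Loading the checkpoint, passing a newer observation to Predict, and checkpointing again moves the model forward without the original training set:


static void UpdateModel(BTCDataModel newObservation)
{
	//load the previously checkpointed model
	ITransformer savedModel;
	using (Stream s = File.Open(fileName, FileMode.Open))
	{
		savedModel = mlContext.Model.Load(s, out DataViewSchema schema);
	}

	var engine = savedModel.CreateTimeSeriesEngine<BTCDataModel, PredictedSeriesDataModel>(mlContext);

	//passing an observation to Predict updates the engine's state before forecasting,
	//so only the new data point is needed rather than the full history
	PredictedSeriesDataModel forecast = engine.Predict(newObservation);

	//save the updated state so the next run picks up where this one left off
	engine.CheckPoint(mlContext, fileName);
}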


With our training and prediction functions complete, all that is left is calling them and viewing our results.


static void Main(string[] args)
{
	GetTrainingData();
	TrainModel();
	Predict();
}

Results

As we can see, the forecaster is fairly accurate predicting the next minute's price (off by about $5), but unfortunately our model predicted a downward trend instead of the upward one we see in the actual results. This just shows that price history alone is not enough to predict the stock market.
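If you want a single number instead of eyeballing the console output, a mean absolute error over the horizon is a quick check. This sketch could be dropped at the end of the Predict function above, since it reuses the local predictions variable and the global testingData and horizon fields:


	//average absolute difference between each forecast and the actual price
	float totalError = 0;
	for (int i = 0; i < horizon; i++)
	{
		totalError += Math.Abs(predictions.ForecastedPrice[i] - testingData[i].Amount);
	}
	Console.WriteLine("Mean absolute error over the horizon: {0}", totalError / horizon);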



Clone the project


You can find the full project on my GitHub site here https://github.com/fiveminutecoder/blogs/tree/master/%20mlnet_BTCTimeSeries

Tuesday, November 10, 2020

Using ML.NET for Natural Language Processing (NLP) in 5 minutes

 What is Natural Language Processing?

Natural language processing, or NLP, is taking text and converting it into something your application can use. The expectation is that someone can type a word or sentence and the application is able to understand and process the command. The challenge is that not everyone communicates the same way, for example:

  • Please save document.
  • Save document.
  • Update Document.
  • I need my document to be put into my accounting folder.

All can be interpreted as a "Save" command. This is a great task for machine learning. We can train our algorithm to interpret what the user is trying to communicate, and complete the task. If you are not familiar with the basics of ML.NET or supervised learning, please check out my previous post https://fiveminutecoder.blogspot.com/2020/07/getting-started-with-mlnet.html.

Create NLP FAQ application in 5 minutes

The Data

I have created a basic csv file with several FAQ questions and answers for a fictional business, you can download the file here which is part of the GitHub repository https://github.com/fiveminutecoder/blogs/tree/master/mlnet_NLP.
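The exact questions do not matter much; the important part is that the file is a two column csv of question/answer pairs, something like this (illustrative rows only, not necessarily the ones in the repository):


Question,Answer
"When are you open?","Our hours are 9 am to 5pm Monday through Friday"
"Can I pay using a visa card?","Our payment options are Credit, Check, or Bitcoin"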

Creating the project

If you have not read my previous blog "Getting Started with ML.NET", which can be found here https://fiveminutecoder.blogspot.com/2020/07/getting-started-with-mlnet.html please do so before continuing since we will be referencing back to it frequently. Following our previous example, we will create a new console application called "mlnet_NLP". Once the project is created we will download the "Microsoft.ML" package from Nuget.

Once you have the project setup we will need to create our two data models, one for the input features of the FAQ (our question and answer), and one for our predictions.


	public class FAQModel
	{
		[ColumnName("Question"), LoadColumn(0)]
		public string Question {get;set;}
		[ColumnName("Answer"), LoadColumn(1)]
		public string Answer {get;set;}
	}



	public class PredictionModel
	{
		[ColumnName("PredictedAnswer")]
		public string PredictedAnswer {get;set;}
		[ColumnName("Score")]
		public float[] Score {get;set;}
	}


Once we have our data models set up, we need to set up our program. We will use the same 5 functions from our previous example to train, test, predict, save, and load our machine learning model. Before we begin, we need to add our references and the fields that will hold our context and data. Again, this should be a separate class, but for the sake of time we will do all of this in our Program.cs.


	//Main context
	static MLContext context;
	//model for training/testing
	static ITransformer model;
	static IEnumerable<FAQModel> trainingData;
	static IEnumerable<FAQModel> testingData;
	static string fileName = "FAQModel.zip";

Our training function will be very similar to the previous one, with the exception of how we create our features. Instead of our features being several columns, we have one column containing several words. Luckily, ML.Net has a function that lets us featurize text, making it a quick swap for our previous concatenate features line. The FeaturizeText method is very powerful and performs several operations under the hood, like removing stop words (the, and, or, etc.). To learn more, visit Microsoft's documentation on preparing data: https://docs.microsoft.com/en-us/dotnet/machine-learning/how-to-guides/prepare-data-ml-net


	static void TrainModel()
	{
		context = new MLContext();

		//Load data from csv file
		var data = context.Data.LoadFromTextFile<FAQModel>("faq.csv", hasHeader:true, separatorChar: ',', allowQuoting: true, allowSparse:true, trimWhitespace: true);
		

		//create data sets for training and testing
		trainingData = context.Data.CreateEnumerable<FAQModel>(data, reuseRowObject: false);
		testingData = new List<FAQModel>()
		{
			new FAQModel() {Question = "When are you open?", Answer = "Our hours are 9 am to 5pm Monday through Friday"},
			new FAQModel() {Question = "Can i pay using a visa card?", Answer =  "Our payment options are Credit, Check, or Bitcoin"},
			new FAQModel() {Question = "How can i contact you.", Answer = "Our phone number is 555-5555 and our fax is 555-5557"}
		};

		//Create our pipeline and set our training model
		var pipeline = context.Transforms.Conversion.MapValueToKey(outputColumnName: "Label", inputColumnName: "Answer") //converts string to key value for training
			.Append(context.Transforms.Text.FeaturizeText("Features", "Question")) //creates features from our text string
			.Append(context.MulticlassClassification.Trainers.SdcaMaximumEntropy(labelColumnName: "Label", featureColumnName: "Features"))//set up our model
			.Append(context.Transforms.Conversion.MapKeyToValue(outputColumnName: "PredictedAnswer", inputColumnName: "PredictedLabel")); //convert our key back to a label

		//trains the model
		 model = pipeline.Fit(context.Data.LoadFromEnumerable(trainingData));
	}
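The FeaturizeText call above uses the default text processing. If you want more control, there is an overload that takes an options object. Here is a minimal sketch of what that could look like; it is not part of this project's pipeline and assumes a using Microsoft.ML.Transforms.Text statement:


	//options control how the text is cleaned before it is turned into features
	var textOptions = new TextFeaturizingEstimator.Options()
	{
		CaseMode = TextNormalizingEstimator.CaseMode.Lower, //normalize casing
		KeepPunctuations = false, //strip punctuation
		StopWordsRemoverOptions = new StopWordsRemovingEstimator.Options()
		{
			Language = TextFeaturizingEstimator.Language.English //remove English stop words
		}
	};

	//this could replace the FeaturizeText line in the pipeline above
	var featurize = context.Transforms.Text.FeaturizeText("Features", textOptions, "Question");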

Now that our training method is set up, we want to test our model. This FAQ is too small to split into separate training and testing sets, so the accuracy would come back as 0. To remedy this, I manually added a couple of test questions to our testing enumerable.


	static void TestModel()
	{
		//transform data to a view that can be evaluated
		IDataView testDataPredictions = model.Transform(context.Data.LoadFromEnumerable(testingData));
		//evaluate test data against trained model for accuracy
		var metrics = context.MulticlassClassification.Evaluate(testDataPredictions);
		double accuracy = metrics.MacroAccuracy;

		Console.WriteLine("Accuracy {0}", accuracy.ToString());
	}

Now that our model is trained, we will save it so it can be loaded in our prediction engine. 


	static void SaveModel()
	{
		IDataView dataView = context.Data.LoadFromEnumerable(trainingData);
		context.Model.Save(model, dataView.Schema, fileName);
	}



	static ITransformer LoadModel()
	{
		DataViewSchema modelSchema;
		//gets a file from a stream, and loads it
		using(Stream s = File.Open(fileName, FileMode.Open))
		{
			return context.Model.Load(s, out modelSchema);
		}
	}

Now we can set up our prediction engine. Again, this is exactly how we set it up in the previous example. Our NLP uses a multiclass supervised learning model, so predicting our answer is handled the same way: pass our question in, and the machine learning algorithm will spit out an answer.


	static void Predict(FAQModel Question)
	{
		ITransformer trainedModel = LoadModel();

		//Creates prediction function from loaded model, you can use the in memory model as well
		 var predictFunction = context.Model.CreatePredictionEngine<FAQModel, PredictionModel>(trainedModel);
		 
		//pass model to function to get prediction outputs
		PredictionModel prediction = predictFunction.Predict(Question);

		//get score, score is an array and the max score will align to key.
		float score = prediction.Score.Max();
	
		Console.WriteLine("Prediction: {0},  accuracy: {1}", prediction.PredictedAnswer, score);

	}
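If you also want to see which answer each score belongs to, the output schema can help. This is a minimal sketch, not part of the project; it assumes using statements for Microsoft.ML.Data and System.Linq, and that the trainer attached slot names to the Score column (multiclass trainers usually do):


	//Score has one value per possible answer; the slot names tell you which answer each position belongs to
	VBuffer<ReadOnlyMemory<char>> slotNames = default;
	predictFunction.OutputSchema["Score"].GetSlotNames(ref slotNames);

	string[] answers = slotNames.DenseValues().Select(n => n.ToString()).ToArray();
	for (int i = 0; i < answers.Length; i++)
	{
		Console.WriteLine("{0}: {1}", answers[i], prediction.Score[i]);
	}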

Now that we have our functions set up, we can call them in the static Main function and start answering questions.


	static void Main(string[] args)
	{
		TrainModel();
		TestModel();
		SaveModel();
		FAQModel question = new FAQModel(){
			Question = "can i Pay online?",
			Answer = ""
		};

		Predict(question);
	}

Clone the project


You can find the full project on my GitHub site here https://github.com/fiveminutecoder/blogs/tree/master/mlnet_NLP

Tuesday, October 6, 2020

Get Started with ML.NET in 5 Minutes



What is ML.NET

ML.NET is a dot net based machine learning framework created by Microsoft. It allows us to use C# to quickly create various machine learning models using built in training methods. ML.NET can also be extended to tap into other machine learning platforms, such as TensorFlow, for scenarios that are not yet supported by ML.NET.

What is supervised learning?


Supervised learning is when we train a model with known labels for our data. The learning is supervised because we are able to give the training algorithm the correct answer for what the data represents. When training a real model, you will want a large data set representing different scenarios for your model.

Create a Supervised Learning Model in about 5 minutes.


The Data Set


For this example, we will be using the Iris Flower Species data set which can be found on the Kaggle website here https://www.kaggle.com/uciml/iris.

Create the Project


Once you have downloaded the data set, we need to create the project. Since ML.NET is so new, it is worth noting that this article was written using version 1.51 and dot net core 3.1. As machine learning evolves, some of these techniques may change.

To start, create a new dot net core console application called "mlnet_intro"

        dotnet new console --name "mlnet_intro"

Now that we have our new project, make sure you have the folder open and add the nuget package "Microsoft.ML". If you are using VSCode, use CTRL+SHIFT+P to search for the package.
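If you prefer the terminal, the package can also be added with the dotnet CLI from the project folder:

        dotnet add package Microsoft.ML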

Data Models


We now have all the necessary components to start creating our supervised learning application. We will need 2 data models: one representing the Iris being fed into the model, and one for displaying results. We will start with our Iris model, aptly named "IrisModel".


	using Microsoft.ML.Data;

        namespace mlnet_intro
        {
            public class IrisModel
            {
                [ColumnName("Id"), LoadColumn(0)]
                public int Id {get;set;}
                [ColumnName("SepalLengthCm"), LoadColumn(1)]
                public float SepalLengthCm {get;set;}
                [ColumnName("SepalWidthCm"), LoadColumn(2)]
                public float SepalWidthCm {get;set;}
                [ColumnName("PetalLengthCm"), LoadColumn(3)]
                public float PetalLengthCm {get;set;}
                [ColumnName("PetalWidthCm"), LoadColumn(4)]
                public float PetalWidthCm {get;set;}
                [ColumnName("Species"), LoadColumn(5)]
                public string Species {get;set;}

            }
        }


Notice that we have attributes for ColumnName and LoadColumn, which come from the Microsoft.ML.Data using statement. LoadColumn is the column position in our CSV; ColumnName is how we will refer to the property when training our model. Keeping these names straight is important so that our label is not accidentally included in the training features; in this case the column named "Species" is our label.

Next, we need to create our prediction model called "PredictionModel". Again we will have an attribute called "ColumnName" so we can map the model to our training output. PredictedSpecies will represent the label, and Score holds the confidence levels for each label.

        using Microsoft.ML.Data;

        namespace mlnet_intro
        {
            public class PredictionModel
            {
                [ColumnName("PredictedSpecies")]
                public string PredictedSpecies {get;set;}
                [ColumnName("Score")]
                public float[] Score {get;set;}
            }
        }


Create the Iris Prediction Application


Now that we have our two models created, we can create our application that will train our AI model for predicting Iris species. In the Program.cs file we will need to create some fields for holding our model context, along with referencing the Microsoft.ML namespace. Typically this would be a separate class, but we are getting close to 5 minutes. 

    
        using System;
        using System.IO;
        using System.Linq;
        using System.Collections.Generic;
        using Microsoft.ML;
        
        
        static MLContext context;
        //model for training/testing
        static ITransformer model;
        static IEnumerable<IrisModel> trainingData;
        static IEnumerable<IrisModel> testingData;
        
        static string fileName = "irisModel.zip";



With our global variables defined, the next thing we must do is train our model. In order to do that we must load our csv data, then we will split the data into training and testing data. We then need to tell our training model the columns used to represent our features and our labels, and select a training method. In this case we will use the multiclass classification trainer. Finally we want to map the predicted value back to our prediction model.


        static void TrainModel()
        {
            
            //Load data from csv file
            var data = context.Data.LoadFromTextFile<IrisModel>("datasets_19_420_Iris.csv", hasHeader:true, separatorChar: ',', allowQuoting: true, allowSparse:true, trimWhitespace: true);
            
            //Splits data into training and testing data sets
            var split = context.Data.TrainTestSplit(data);
            
             
            //create data sets for training and testing
            trainingData = context.Data.CreateEnumerable<IrisModel>(split.TrainSet, reuseRowObject: false);
            testingData = context.Data.CreateEnumerable<IrisModel>(split.TestSet, reuseRowObject: false);


            //Create our pipeline and set our training model
            var pipeline = context.Transforms.Conversion.MapValueToKey(outputColumnName: "Label", "Species") //converts string to key value for training
                .Append(context.Transforms.Concatenate("Features", new[]{"SepalLengthCm", "SepalWidthCm", "PetalLengthCm", "PetalWidthCm"})) //identifies training data from model
                .Append(context.MulticlassClassification.Trainers.SdcaMaximumEntropy(labelColumnName: "Label", featureColumnName: "Features")) //set trainer and identifies features and label
                .Append(context.Transforms.Conversion.MapKeyToValue(outputColumnName: "PredictedSpecies", inputColumnName: "PredictedLabel")); //convert prediction to string PredictedLabel is output label key for predict

            //trains the model
             model = pipeline.Fit(context.Data.LoadFromEnumerable(trainingData));



        }

In the training method, the main things to note are the two lines "MapValueToKey" and "MapKeyToValue". These take the string for our label and create a key value, and then map that key back to a string. This allows our prediction model to return a string value for the Iris name instead of the numeric key.

Now that our model is trained, we want to test it against our test data and check its accuracy. ML.Net has evaluation built in.

        static void TestModel()
        {
            //transform data to a view that can be evaluated
            IDataView testDataPredictions = model.Transform(context.Data.LoadFromEnumerable(testingData));
            //evaluate test data against trained model for accuracy
            var metrics = context.MulticlassClassification.Evaluate(testDataPredictions);
            double accuracy = metrics.MicroAccuracy;

            Console.WriteLine("Accuracy {0}", accuracy.ToString());

        }

Accuracy may vary here since it is a small dataset; this is for learning, so we are not too concerned. Next we will save and load the model to and from a file. This is helpful for reusing your model in web applications or other services.

	static void SaveModel()
        {
            IDataView dataView = context.Data.LoadFromEnumerable(trainingData);
           context.Model.Save(model, dataView.Schema, fileName);
        }

        static ITransformer LoadModel()
        {
            DataViewSchema modelSchema;
            //gets a file from a stream, and loads it
            using(Stream s = File.Open(fileName, FileMode.Open))
            {
                return context.Model.Load(s, out modelSchema);

                
            }
         }


Finally, we can now use our newly saved model to predict Iris.

	static void Predict(IrisModel iris)
        {
            ITransformer trainedModel = LoadModel();

            //Creates prediction function from loaded model, you can use the in memory model as well
             var predictFunction = context.Model.CreatePredictionEngine<IrisModel, PredictionModel>(trainedModel);
             
            //pass model to function to get prediction outputs
            PredictionModel prediction = predictFunction.Predict(iris);

            //get score, score is an array and the max score will align to key.
            float score = prediction.Score.Max();
        
            Console.WriteLine("Prediction: {0},  accuracy: {1}", prediction.PredictedSpecies, score);

        }


Our AI setup is complete; we just need to call our newly created methods and see the results. In my example I set the species to "hello". This demonstrates that the model did not cheat and use the label as a feature.

	static void Main(string[] args)
        {
        	context = new MLContext();
            Console.WriteLine("Training Iris Model");
            TrainModel();
            Console.WriteLine("Testing Iris Model");
            TestModel();
            SaveModel();

            IrisModel test = new IrisModel(){
                    SepalLengthCm = 5.2f,
                    SepalWidthCm = 3.5f,
                    PetalLengthCm = 1.4f,
                    PetalWidthCm = 0.2f,
                    Species = "hello"
                };

            Predict(test);

            Console.Read();
        }


Clone the project


You can find the full project on my GitHub site here https://github.com/fiveminutecoder/blogs/tree/master/mlnet_intro
