
AI deep learning neural network for anomaly detection using Python, Keras and TensorFlow - BLarzalere/LSTM-Autoencoder-for-Anomaly-Detection.

tf.keras.layers.GRU: a type of RNN with size units=rnn_units. (You can also use an LSTM layer here.)

This is the implementation of the LSTM-based Stacked Autoencoder (LSTM-SAE) model. The model is described in the paper "Unsupervised Pre-training of a Deep LSTM-based Stacked Autoencoder for Multivariate Time Series Forecasting Problems".

Sep 2, 2024 · To get started, we'll use TensorFlow 2.x and Keras, a high-level API built on TensorFlow. If you're using Google Colab or Jupyter, you can begin with the following setup: %tensorflow_version 2.x and %pylab inline. Loading the data: from tensorflow import keras. We'll be using the MNIST dataset, which is readily available in Keras.

Sep 1, 2023 · Although the LSTM-based autoencoder is also used in Dutta et al. (2021), our proposed model is structurally different: it uses three LSTM layers in each encoder and decoder block to better capture the encoded feature representation, as compared to Dutta et al. (2021), where two LSTM layers are used in the encoder and decoder parts.

Introduction: this article is something of a sequel to "Implementing the feedforward pass of a Keras LSTM in numpy". It implements an LSTM AutoEncoder in Keras and tries binary classification on the resulting features…

LSTM variational autoencoder for time-series anomaly detection and feature extraction - TimyadNyda/Variational-Lstm-Autoencoder.

May 17, 2019 · What we are looking for here is: in the original data, y = 1 at row 257. With lookback = 5, we want the LSTM to look at the 5 rows before row 257 (including itself). In the 3D array X, each 2D block at X[i,:,:] denotes the prediction data that corresponds to y[i].

Sep 21, 2021 · In this article, we explore autoencoders, their structure and variations (convolutional autoencoder), and we present three implementations using TensorFlow and Keras.

Oct 16, 2020 · In summary, we've explored how to build and apply a 2D LSTM autoencoder: the data is compressed…

Oct 20, 2020 · Neural networks like Long Short-Term Memory (LSTM) recurrent neural networks are able to almost seamlessly model problems with multiple input variables. This is a great benefit in time series forecasting, where classical linear methods can be difficult to adapt to multivariate or multiple-input forecasting problems.

One of the key factors in anomaly detection is balancing the trade-off between sensitivity and specificity. Balancing these trade-offs requires choosing the detection threshold carefully.

The model used for this task was an LSTM autoencoder. A little bit of context first: I am developing the model using DeepNote, and according to the terminal the installed TensorFlow version is 2.x.

Check the range of your data X_train and X_test. Make sure they're not too big; something between -4 and +4 is sort of good.

This implementation will make direct use of the TensorFlow core API, which requires some prerequisite knowledge; I'll briefly explain three fundamental concepts: the tf.placeholder, the tf.Variable, and the tf.Tensor.

Nov 10, 2017 · Yes, and the summary of the model shows that the output of the LSTM layer is (None, None, 128), but when it comes to fitting it becomes (25000, 1), which is quite odd.

Jul 19, 2020 · In a recent post, we showed how an LSTM autoencoder, regularized by false nearest neighbors (FNN) loss, can be used to reconstruct the attractor of a nonlinear, chaotic dynamical system. Here, we explore how that same technique assists in prediction. Matched up with a comparable, capacity-wise, "vanilla LSTM", FNN-LSTM improves performance on a set of very different, real-world datasets.

Gradient clipping appears to choke on None.

To learn more about anomaly detection with autoencoders, check out the excellent interactive example built with TensorFlow.js by Victor Dibia.

Similar to the LSTM-AE model, the LSTM-VAE is also a reconstruction-based anomaly detection model, which consists of a pair of encoder and decoder.
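The lookback windowing described above is easy to get wrong, so here is a minimal sketch of it (the helper name make_windows and all shapes are illustrative assumptions, not code from the quoted post):

    import numpy as np

    def make_windows(data, labels, lookback=5):
        # Build the 3D array X described above: X[i, :, :] is the block of
        # `lookback` rows ending at a given row, matched with label y[i].
        X, y = [], []
        for i in range(lookback - 1, len(data)):
            X.append(data[i - lookback + 1 : i + 1])  # the 5 rows up to and including row i
            y.append(labels[i])
        return np.array(X), np.array(y)

    # e.g. 300 rows of 4 features, with a single positive label at row 257
    data = np.random.rand(300, 4)
    labels = np.zeros(300)
    labels[257] = 1
    X, y = make_windows(data, labels)
    print(X.shape, y.shape)  # (296, 5, 4) (296,)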
The encoder consists of an LSTM network and two linear neural networks that estimate the mean and covariance of the latent variable z.
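A hedged sketch of such an encoder in Keras (layer sizes, names, and the log-variance parameterization are assumptions; the quoted description gives no code). Note how the batch dimension is inferred inside the sampling function rather than hard-coded, as one of the snippets further below also advises:

    import tensorflow as tf
    from tensorflow.keras import layers

    timesteps, n_features, latent_dim = 30, 4, 8  # assumed sizes

    inputs = tf.keras.Input(shape=(timesteps, n_features))
    h = layers.LSTM(32)(inputs)                # the LSTM summarizes the sequence
    z_mean = layers.Dense(latent_dim)(h)       # linear net estimating the mean of z
    z_log_var = layers.Dense(latent_dim)(h)    # linear net estimating the (log) variance of z

    def sample(args):
        z_mean, z_log_var = args
        batch = tf.shape(z_mean)[0]            # infer batch_dim at runtime
        eps = tf.random.normal(shape=(batch, latent_dim))
        return z_mean + tf.exp(0.5 * z_log_var) * eps

    z = layers.Lambda(sample)([z_mean, z_log_var])
    encoder = tf.keras.Model(inputs, [z_mean, z_log_var, z])
    encoder.summary()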
Sep 6, 2015 · Unlike what has been said before, only the TensorFlow seed has an effect on the random generation of weights (latest version TensorFlow 2.0). Here is a small test you can run to check the influence of each seed (with np being numpy, tf being tensorflow, and random the Python random library).

Jun 13, 2021 · Training an autoencoder for variable-length time series in TensorFlow.

LSTM Auto-Encoder (LSTM-AE) implementation in PyTorch - matanle51/LSTM_AutoEncoder.

I try to align them with no reshape layer.

I'm interested in moving the… Project GitHub link: https://github.com/alind-saxena/Anomaly_Detection/blob/main/Data%20Science/Anomaly%20Detection%20On%20Time%20Series%20Data%20-%20LSTM%20

Jun 7, 2019 · TensorFlow low-level implementation. In this tutorial, you will discover how you can […]

Dec 2, 2019 · Import the libraries:

    import os
    import pandas as pd
    import numpy as np
    from sklearn.preprocessing import MinMaxScaler
    from sklearn.externals import joblib
    import seaborn as sns
    sns.set(color_codes=True)
    import matplotlib.pyplot as plt
    %matplotlib inline
    from numpy.random import seed
    from tensorflow import set_random_seed
    import tensorflow as tf

The first layer is an Embedding layer, which learns a word embedding that in our case has a dimensionality of 15.

When you use a stacked version of LSTM, like an autoencoder, you need to pass the hidden and cell states from the encoder to the decoder for the decoder to have the encoder's context.

Jan 31, 2018 · I am trying to use batch normalization in an LSTM using Keras in R.

Aug 16, 2024 · In this tutorial, you will use an RNN layer called Long Short-Term Memory (tf.keras.layers.LSTM).

An AutoEncoder is a data compression and decompression algorithm implemented with neural networks and/or convolutional neural networks.

My question is: "does it need to be an autoencoder"? (An autoencoder has the goal of recreating the input data as smaller, condensed, unintelligible data for further usage in other models.)

Aug 27, 2020 · What is an LSTM autoencoder? An LSTM autoencoder is an implementation of an autoencoder for sequence data using an encoder-decoder LSTM architecture.

But the RepeatVector will need some hacking, taking dimensions directly from the input tensor.

The autoencoder learns crucial patterns, and with the use of LSTM it can learn patterns in series data, thus making it a superior solution to the common approaches.

Jan 7, 2021 · Defining the Keras model.

An autoencoder is a special type of neural network that is trained to copy its input to its output. However, my input for each time step in the LSTM layer is a vector of dimension 13.

May 14, 2016 · To build an LSTM-based autoencoder, first use an LSTM encoder to turn your input sequences into a single vector that contains information about the entire sequence, then repeat this vector n times (where n is the number of timesteps in the output sequence), and run an LSTM decoder to turn this constant sequence into the target sequence.

LSTM networks are a sub-type of the more general recurrent neural networks (RNN). An autoencoder learns two functions: an encoding function that transforms the input data, and a decoding function that recreates the input data from the encoded representation.
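A minimal, runnable sketch of that encoder/RepeatVector/decoder recipe in Keras (all sizes here are illustrative assumptions):

    import numpy as np
    from tensorflow.keras import Input, Model, layers

    timesteps, input_dim, latent_dim = 10, 1, 16   # assumed sizes

    inputs = Input(shape=(timesteps, input_dim))
    encoded = layers.LSTM(latent_dim)(inputs)              # sequence -> single vector
    repeated = layers.RepeatVector(timesteps)(encoded)     # repeat the vector n times
    decoded = layers.LSTM(input_dim, return_sequences=True)(repeated)  # -> target sequence

    autoencoder = Model(inputs, decoded)
    autoencoder.compile(optimizer="adam", loss="mse")

    x = np.random.rand(32, timesteps, input_dim).astype("float32")
    autoencoder.fit(x, x, epochs=2, verbose=0)             # input and target are the same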
Feb 3, 2020 · In this post I want to illustrate a problem I have been thinking about in time series forecasting, while simultaneously showing how to properly use some TensorFlow features which greatly help in this setting (specifically, the tf.data.Dataset class and Keras' functional API).

…tf.keras, and how to run it on Kaggle GPUs/TPUs.

LSTM autoencoder implementation with TensorFlow.

It outputs one logit for each character in the vocabulary.

The code supports Deep Supervision, Autoencoder mode, Guided Attention, Bi-Directional Convolutional LSTM, and other options explained in the code and demos.

Aug 15, 2016 · The following paper proposed unsupervised learning for video sequences using LSTMs: Nitish Srivastava, Elman Mansimov, Ruslan Salakhutdinov, "Unsupervised Learning of Video Representations using LSTMs," Proceedings of the 32nd International Conference on Machine Learning, pp. 843-852, 2015.

What is an LSTM autoencoder? Sep 25, 2019 · Here, we will use Long Short-Term Memory (LSTM) neural network cells in our autoencoder model.

Sep 20, 2019 · I am trying an autoencoder model with LSTM layers in Keras for text outlier detection. I have encoded every sentence into a sequence of numbers, with each number representing a letter.

NaN loss in a TensorFlow LSTM model.

Apr 8, 2016 · Explosion in the loss function, LSTM autoencoder.

A TensorFlow implementation of a Bidirectional Phased LSTM network for sequence classification tasks. This implementation includes attention mechanisms, peepholes, and supports both CUDA-optimized and standard LSTM cells.

This code is based on this paper: https://arxiv.org/abs/1502.04681

In this LSTM autoencoder version, the decoder part is capable of producing, from an encoded version, as many timesteps as desired, which also serves the purpose of predicting future steps.

Autoencoders, on the other hand, learn efficient data encodings in order to, e.g., classify images, or more precisely determine a specific occurrence or non-occurrence.

Oct 15, 2021 · I'm currently playing around with a basic LSTM-based autoencoder using the TensorFlow library. The goal is for the autoencoder to reconstruct multivariate time series.

This technique can be utilized in various applications, including noise removal and feature extraction (using only the encoder)…
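For the NaN-loss and loss-explosion questions above, a common first mitigation (a hedged sketch, not taken from any one of the quoted threads) is to keep the bounded default activation and clip gradients in the optimizer:

    import tensorflow as tf
    from tensorflow.keras import layers

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(10, 3)),
        layers.LSTM(64, activation="tanh"),   # the bounded default, rather than relu
        layers.Dense(3),
    ])
    # clipnorm caps each gradient's norm so one bad batch cannot blow up the loss
    model.compile(optimizer=tf.keras.optimizers.Adam(clipnorm=1.0), loss="mse")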
Example: you have a 2D tensor input that represents a sequence (timesteps, dim_features). If you apply a dense layer to it with new_dim outputs, the tensor you will have after the layer will be a new sequence (timesteps, new_dim). The dense layer can take sequences as input, and it will apply the same dense layer on every vector (the last dimension).

May 31, 2024 · An important constructor argument for all Keras RNN layers, such as tf.keras.layers.LSTM, is the return_sequences argument. The docs say: Boolean. Whether to return the last output in the output sequence, or the full sequence.

Dec 8, 2020 · Specifically, what spurred this question is the return_sequences argument of TensorFlow's version of an LSTM layer. This doesn't seem to fit into the notion…

Jun 20, 2017 · This autoencoder consists of two parts: an LSTM encoder, which takes a sequence and returns an output vector (return_sequences = False), and an LSTM decoder, which takes an output vector and returns a sequence (return_sequences = True). So, in the end, the encoder is a many-to-one LSTM and the decoder is a one-to-many LSTM.

Specifically, we shall discuss the subclassing API implementation of an autoencoder.

Figure: plot of loss/accuracy vs. epoch.

It seems that replacing the LSTMs with just TimeDistributed(Dense()) may also work in this case, but I cannot say for sure as I don't know the data.

The code listing 1.6 shows how to load the model.

Convolutional layers: the model doesn't have to learn dense layers; it can use CNN or LSTM layers.

Image source: Andrej Karpathy.

For a real-world use case, see how Airbus detects anomalies in ISS telemetry data using TensorFlow.

In my dataset the target/output variable is the Sales column, and every row in the dataset records the sales for each day in a year.

This repository contains 1D and 2D signal segmentation model builders for UNet, several of its variants, and other models developed in TensorFlow-Keras.

A key attribute of recurrent neural networks is their ability to persist information, or cell state, for use later in the network.

Jun 17, 2021 · I am able to fit this autoencoder to my sequence in order to reconstruct it. However, how would I be able to walk this forward 3 timesteps to get [[11.0], [12.0], [13.0]]?

Jan 27, 2020 · I am attempting to create an LSTM denoising autoencoder for use on long time series (100,000+ points) in Python using TensorFlow.

Feb 21, 2023 · Now, we can prepare some TensorFlow datasets (I love using these!), remembering that the input data and target data for an autoencoder should be the same.

What is an LSTM autoencoder? In model 1, each point of 77 features is compressed and decompressed this way: 77 -> 16 -> 16 -> 77, plus some info from the previous steps. The model has to find a way to "compress"…

Jan 19, 2024 · This tutorial demonstrates how to generate images of handwritten digits using graph mode execution in TensorFlow 2.0 by training an autoencoder.

Sep 30, 2017 · You can use shape=(None, input_dim).

We'll use the model to find anomalies in S&P 500 daily closing prices. This guide will show you how to build an anomaly detection model for time series data. You'll learn how to use LSTMs and autoencoders in Keras and TensorFlow 2.

Sep 7, 2020 · The steps we will follow to detect anomalies in Johnson & Johnson stock price data using an LSTM autoencoder: train an LSTM autoencoder on the Johnson & Johnson stock price data from 1985-09-04 to 2013-09-03. We assume that there were no anomalies in that data and that it was normal.
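A hedged sketch of the detection step that usually follows (autoencoder, X_train, and X_test are assumed to exist as described above; taking the worst reconstruction error seen on normal data as the threshold is one common heuristic, not the quoted tutorial's exact rule):

    import numpy as np

    # reconstruction error per window, averaged over timesteps and features
    train_err = np.mean(np.abs(autoencoder.predict(X_train) - X_train), axis=(1, 2))
    threshold = train_err.max()  # assumed heuristic: worst error on normal data

    test_err = np.mean(np.abs(autoencoder.predict(X_test) - X_test), axis=(1, 2))
    anomalies = test_err > threshold
    print(int(anomalies.sum()), "anomalous windows")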
Jun 3, 2022 · An extension of the autoencoder, known as the variational autoencoder, can be used to generate a potentially new image dataset from an available set of images. Overview of the…

Apr 1, 2022 · In order to solve the above-mentioned problems in power load forecasting, this research proposes an LSTM-autoencoder model that combines long-term and short-term features (LS-LSTM-AE) for power load forecasting, and makes the following contributions:

Jan 19, 2020 · I'd like to implement an encoder-decoder architecture based on an LSTM or GRU with an attention layer. I saw that Keras has a layer for that, tensorflow.keras.layers.Attention, and I'd like to use it. For example, if I use an LSTM without attention, the "classic" approach is to use the last hidden state as the context vector; it should represent the main features of my input sequence. If I were to use an LSTM with attention, my latent representation would have to be all hidden states per time step.

tf.keras.layers.Dense: the output layer, with vocab_size outputs. These are the log-likelihoods of each character according to the model.

Now that we have a trained autoencoder model, we will use it to make predictions.

May 13, 2024 · Anomaly detection is the process of detecting unusual or unforeseen patterns or events in data. Many factors, such as malfunctioning hardware, malevolent activity, or modifications to the data's underlying distribution, might cause anomalies.

Aug 22, 2020 · As the message clearly says, it is a shape issue in what you are passing to the model for fitting. From the data you have given, X has shape (6, 3, 2) and Y has shape (6, 2), which are incompatible.

Aug 31, 2020 · There is a way to handle that in the LSTM architecture: in your LSTM, set the timestep component of the input_shape argument to None; this will let you accept sequences of variable length. One problem then arises because you will have to fit the inputs into a numpy array, which has a strict structure (the same length for every sample).

Jul 8, 2019 · So I have been working on an LSTM autoencoder model. I have also created various versions of this model. I have shied away from the typical LSTM autoencoder structure, where…

The autoencoder is implemented with TensorFlow. Here is a minimal example:

    # latent_dim: int, latent z-layer shape
    decoder_input = Input(shape=(latent_dim,))
    _h_decoded = RepeatVector(timesteps)(decoder_input)
    decoder_h = LSTM(intermediate_dim, return_sequences=True)
    _h_decoded = decoder_h(_h_decoded)
    decoder_mean = LSTM(input_dim, return_sequences=True)

Feb 24, 2020 · Figure 4: the results of removing noise from MNIST images using a denoising autoencoder trained with Keras, TensorFlow, and deep learning.

Mar 3, 2023 · Autoencoder variations explained: common applications, their use in NLP, how to use them for anomaly detection, and a Python implementation in TensorFlow. An autoencoder is a neural network trained…

Apr 26, 2018 · An autoencoder is a type of artificial neural network used to learn efficient codings of unlabeled data.

Sep 19, 2022 · …the two of which combine to make an autoencoder.

To install TensorFlow 2.0, use the following pip install command: pip install tensorflow==2.0. Or, if you have a GPU in your system: pip install tensorflow-gpu==2.0.

Installation: keras (with tensorflow under the hood) and pandas are required for the example, plus pyfolder for save/load of the trained model.

Let me explain this with the following example and show two solutions to achieve masking in an LSTM autoencoder. Each layer in Keras has an input_mask and an output_mask; in your example, the mask was already lost right after the first LSTM layer (when return_sequences = False).
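A small eager-mode check of that masking claim (a sketch assuming tf.keras 2.x behavior; it relies on the private _keras_mask attribute, so treat it as illustrative only):

    import tensorflow as tf
    from tensorflow.keras import layers

    x = tf.constant([[[1.0], [2.0], [0.0], [0.0]]])   # one sequence with 2 padded steps
    masked = layers.Masking(mask_value=0.0)(x)
    print(masked._keras_mask.numpy())                 # [[ True  True False False]]

    seq = layers.LSTM(4, return_sequences=True)(masked)
    print(seq._keras_mask.numpy())                    # the mask still propagates

    vec = layers.LSTM(4)(masked)                      # return_sequences=False
    print(getattr(vec, "_keras_mask", None))          # None: the mask is gone here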
In the encoder step, the LSTM reads the whole input sequence; its outputs at each time step are ignored.

LSTM-Autoencoder and LSTM-Predictor in TensorFlow - IzPerfect/LSTM-Autoencoder.

Jan 10, 2023 · Implementing Long Short-Term Memory (LSTM) networks in R involves using libraries that support deep learning frameworks like TensorFlow or Keras. These frameworks provide high-level interfaces for efficiently building and training LSTM models. Here's a step-by-step guide to implementing LSTM using R.

Jan 10, 2022 · So the answer is no, because apart from dealing with non-linear data, the autoencoder has applications ranging from computer vision to time series forecasting.

LSTM is a neural network capable of modeling short- and long-term dependencies in data; therefore its use for time series data is justified.

Jan 22, 2019 · Using RepeatVector you can repeat the latent output n times. Then, feed it into the LSTM.

It is my belief that Keras automatically uses the GPU wherever possible.

Jun 24, 2021 · I am trying to build an LSTM autoencoder for the compression of time series (currently only one-dimensional, but it could also be for multiple dimensions).

The system utilizes Long Short-Term Memory (LSTM) networks and autoencoders, optimized for Arduino microcontrollers using quantization techniques and integrated with TensorFlow Lite for real-time implementation.

For a given dataset of sequences, an encoder-decoder LSTM is configured to read the input sequence, encode it, decode it, and recreate it.

Non-linear transformations: it can learn non-linear activation functions and multiple layers.

Fast LSTM implementation backed by CuDNN; it can only be run on a GPU, with the TensorFlow backend. According to the TensorFlow build instructions, to have a working TensorFlow GPU backend you will need CuDNN. The following NVIDIA software must be installed on your system: …

For anomaly detection, the autoencoder is widely used. But using an autoencoder on many variables with strong correlations is said to cause a decline in detection power. To avoid this problem, the technique of applying L1 regularization to the LSTM autoencoder is advocated in the paper below.

Sep 21, 2020 · You need to infer the batch_dim inside the sampling function, and you need to pay attention to your loss: your loss function uses the output of previous layers, so you need to take care of this.

Mar 20, 2019 · This post is a humble attempt to contribute to the body of working TensorFlow 2.0 examples.

Aug 12, 2022 · LSTMs are generally applied to sequential data, like time series or music scores.

Mar 26, 2024 · In this article, we will explore how to create an LSTM (Long Short-Term Memory) autoencoder using TensorFlow.

TensorFlow LSTM-autoencoder implementation. Usage:

    # hidden_num : the number of hidden units in each RNN-cell
    # inputs     : a list of tensors with size (batch_num x step_num x elem_num)
    ae = LSTMAutoencoder(hidden_num, inputs)

Keras' LSTM implementation expects an input of type (Batch, Timesteps, Features). And each sample has a variable number of these vectors, which means the timestep count is not constant across samples. One solution would be to set Timesteps = 1 and pass the sequence lengths as the batch dimension.
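A short sketch of the other common option mentioned earlier, leaving the timestep dimension unspecified (shapes and sizes here are assumptions):

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers

    n_features = 2
    inputs = tf.keras.Input(shape=(None, n_features))   # None: any sequence length
    state = layers.LSTM(16)(inputs)                     # one vector per sequence
    model = tf.keras.Model(inputs, state)

    # Sequences of different lengths go in separate batches: a numpy array
    # is rectangular, so you cannot mix lengths within one batch.
    for length in (5, 9, 3):
        batch = np.random.rand(4, length, n_features).astype("float32")
        print(model(batch).shape)                       # (4, 16) regardless of length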
Mar 20, 2020 · Don't use 'relu' for LSTM; leave the standard activation, which is 'tanh'. Because LSTMs are recurrent, it is very easy for them to accumulate growing or shrinking values to the point of making the numbers useless.

This is the plan: anomaly detection; LSTM autoencoders; S&P 500 index data; LSTM autoencoder in Keras; finding anomalies.

LSTM Autoencoder for time-series prediction written in TensorFlow - peytonhong/LSTMAutoencoder.

On the left we have the original MNIST digits that we added noise to, while on the right we have the output of the denoising autoencoder; we can clearly see that the denoising autoencoder was able to recover the original signal (i.e., the digit) from the noisy input.

May 31, 2020 · Epoch 1/50 - 26/27 steps - 4 ms/step - loss: 0.8419.

Jun 4, 2019 · Here we will break down an LSTM autoencoder network to understand it layer by layer. We will go over the input and output flow between the layers, and also compare the LSTM autoencoder with a regular LSTM network.
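A compact sketch of that layer-by-layer flow (all sizes are illustrative assumptions; model.summary() prints each layer's output shape):

    import tensorflow as tf
    from tensorflow.keras import layers

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(30, 4)),                  # (batch, 30, 4)
        layers.LSTM(16),                                # -> (batch, 16), tanh by default
        layers.RepeatVector(30),                        # -> (batch, 30, 16)
        layers.LSTM(16, return_sequences=True),         # -> (batch, 30, 16)
        layers.TimeDistributed(layers.Dense(4)),        # -> (batch, 30, 4)
    ])
    model.summary()   # shows the input/output flow between the layers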
{"Title":"What is the best girl name?","Description":"Wheel of girl names","FontSize":7,"LabelsList":["Emma","Olivia","Isabel","Sophie","Charlotte","Mia","Amelia","Harper","Evelyn","Abigail","Emily","Elizabeth","Mila","Ella","Avery","Camilla","Aria","Scarlett","Victoria","Madison","Luna","Grace","Chloe","Penelope","Riley","Zoey","Nora","Lily","Eleanor","Hannah","Lillian","Addison","Aubrey","Ellie","Stella","Natalia","Zoe","Leah","Hazel","Aurora","Savannah","Brooklyn","Bella","Claire","Skylar","Lucy","Paisley","Everly","Anna","Caroline","Nova","Genesis","Emelia","Kennedy","Maya","Willow","Kinsley","Naomi","Sarah","Allison","Gabriella","Madelyn","Cora","Eva","Serenity","Autumn","Hailey","Gianna","Valentina","Eliana","Quinn","Nevaeh","Sadie","Linda","Alexa","Josephine","Emery","Julia","Delilah","Arianna","Vivian","Kaylee","Sophie","Brielle","Madeline","Hadley","Ibby","Sam","Madie","Maria","Amanda","Ayaana","Rachel","Ashley","Alyssa","Keara","Rihanna","Brianna","Kassandra","Laura","Summer","Chelsea","Megan","Jordan"],"Style":{"_id":null,"Type":0,"Colors":["#f44336","#710d06","#9c27b0","#3e1046","#03a9f4","#014462","#009688","#003c36","#8bc34a","#38511b","#ffeb3b","#7e7100","#ff9800","#663d00","#607d8b","#263238","#e91e63","#600927","#673ab7","#291749","#2196f3","#063d69","#00bcd4","#004b55","#4caf50","#1e4620","#cddc39","#575e11","#ffc107","#694f00","#9e9e9e","#3f3f3f","#3f51b5","#192048","#ff5722","#741c00","#795548","#30221d"],"Data":[[0,1],[2,3],[4,5],[6,7],[8,9],[10,11],[12,13],[14,15],[16,17],[18,19],[20,21],[22,23],[24,25],[26,27],[28,29],[30,31],[0,1],[2,3],[32,33],[4,5],[6,7],[8,9],[10,11],[12,13],[14,15],[16,17],[18,19],[20,21],[22,23],[24,25],[26,27],[28,29],[34,35],[30,31],[0,1],[2,3],[32,33],[4,5],[6,7],[10,11],[12,13],[14,15],[16,17],[18,19],[20,21],[22,23],[24,25],[26,27],[28,29],[34,35],[30,31],[0,1],[2,3],[32,33],[6,7],[8,9],[10,11],[12,13],[16,17],[20,21],[22,23],[26,27],[28,29],[30,31],[0,1],[2,3],[32,33],[4,5],[6,7],[8,9],[10,11],[12,13],[14,15],[18,19],[20,21],[22,23],[24,25],[26,27],[28,29],[34,35],[30,31],[0,1],[2,3],[32,33],[4,5],[6,7],[8,9],[10,11],[12,13],[36,37],[14,15],[16,17],[18,19],[20,21],[22,23],[24,25],[26,27],[28,29],[34,35],[30,31],[2,3],[32,33],[4,5],[6,7]],"Space":null},"ColorLock":null,"LabelRepeat":1,"ThumbnailUrl":"","Confirmed":true,"TextDisplayType":null,"Flagged":false,"DateModified":"2020-02-05T05:14:","CategoryId":3,"Weights":[],"WheelKey":"what-is-the-best-girl-name"}