In my previous post I explained several approaches and what you need to know about SQL Server in-database machine learning workloads.
In this post, we take a quick look at neural network (deep learning) workloads in SQL Server.
To keep the samples simple, here we use the well-known MNIST (handwritten digits) dataset, stored in SQL Server as the following table.
This table (MnistImgTbl) consists of 42,000 rows. The columns “Pixel1” – “Pixel784” represent the 28 x 28 (= 784) pixels of a handwritten digit as grayscale values [0 – 255], and the column named “Label” holds the actual digit for each handwritten image.
In a real scenario, the pixel integer values might be generated programmatically from raw image binaries (VARBINARY), but here we start from a table of integer values as follows. (You can download the SQL script from here.)
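To illustrate how such pixel columns could be produced programmatically, here is a minimal Python sketch that flattens a 28 x 28 grayscale image into the 784 values of one table row. The dummy image here is invented for illustration; real extraction from VARBINARY would use an image library.

```python
# Hypothetical sketch: turning a 28x28 grayscale image into the
# Pixel1..Pixel784 column values of MnistImgTbl. The image content
# below is made up for illustration.
digit = [[0] * 28 for _ in range(28)]   # dummy 28x28 image, values 0-255
digit[14][14] = 255                     # one bright pixel in the center

# Row-major flatten: PixelN corresponds to position (N - 1) in the flat list
pixels = [val for row in digit for val in row]

print(len(pixels))            # 784
print(pixels[14 * 28 + 14])   # 255 (the pixel we set)
```

These 784 integers would then be inserted as Pixel1 … Pixel784 of a single row.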
As I explained in my previous post, you can first use standard R functions and frameworks (TensorFlow, CNTK, Caffe, etc.) through sp_execute_external_script.
Let’s look at the following example with MXNetR, but please take care when using this approach in production.
For simplicity (and to make it comparable with another code sample later), this sample uses a fully connected feed-forward network (a trivial network, not a convolutional one).
Moreover, this code loads all the data (42,000 rows) into memory at once, but you can read the data in chunks (by changing the value of the @r_rowsPerRead parameter) and apply R functions to each chunk in turn using rxDataStep() in SQL Server Enterprise. (I don’t show the chunking code here, but please see here for MXNetR incremental training. Later I’ll show a sample using RevoScaleR functions for deep learning.)
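To make the chunking idea concrete, here is a small Python sketch of what chunk-by-chunk processing looks like conceptually (this only mimics the idea behind rxDataStep() and @r_rowsPerRead; the data and chunk size are illustrative, not from SQL Server):

```python
# Conceptual sketch of chunk-by-chunk reading, mimicking what
# rxDataStep() does with a rowsPerRead setting. Data is a stand-in
# for the 42,000 MNIST rows.
def chunks(rows, rows_per_read):
    """Yield successive slices of at most rows_per_read rows."""
    for start in range(0, len(rows), rows_per_read):
        yield rows[start:start + rows_per_read]

data = list(range(42000))          # stand-in for the table rows
sizes = [len(c) for c in chunks(data, 10000)]
print(sizes)  # [10000, 10000, 10000, 10000, 2000]
```

Each chunk fits in memory on its own, which is why this style avoids the allocation problems described below.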
Note : Before running this code, you must install the required packages using the RGui utility in the SQL Server distributed files.
If you want to install GPU-accelerated MXNetR on Windows, you can refer to the PowerShell script (install.ps1) of the VM extension resource on the DSVM (Data Science Virtual Machine).
Note : Almost all deep neural network frameworks have some sort of image iterator or file-streaming pointer, but these are not available in SQL Server workloads.
DROP TABLE IF EXISTS ResultTbl;
CREATE TABLE ResultTbl (
  ID int,
  ActualLabel int,
  PredictedLabel int
)
GO
INSERT INTO ResultTbl
EXEC sp_execute_external_script
  @language = N'R',
  @script = N'
require(mxnet)

# separate train and test
train <- MnistData[1:41000,]
test <- MnistData[41001:42000,]

# separate label and input (pixels)
drops <- c("ID","Label")
train.x <- train[,!(names(train) %in% drops)]
train.y <- train[,c("Label")]

# transform image pixels [0, 255] into [0, 1]
train.x <- t(train.x/255)

# configure network
data <- mx.symbol.Variable("data")
fc1 <- mx.symbol.FullyConnected(data, name="fc1", num_hidden=128)
act1 <- mx.symbol.Activation(fc1, name="relu1", act_type="relu")
fc2 <- mx.symbol.FullyConnected(act1, name="fc2", num_hidden=64)
act2 <- mx.symbol.Activation(fc2, name="relu2", act_type="relu")
fc3 <- mx.symbol.FullyConnected(act2, name="fc3", num_hidden=10)
softmax <- mx.symbol.SoftmaxOutput(fc3, name="sm")

# train !
mx.set.seed(0)
model <- mx.model.FeedForward.create(
  softmax,
  X=train.x,
  y=train.y,
  ctx=mx.cpu(),   # cpu only, here
  num.round=10,
  array.batch.size=100,
  array.layout = "colmajor",
  learning.rate=0.07,
  momentum=0.9,
  eval.metric=mx.metric.accuracy,
  initializer=mx.init.uniform(0.07),
  epoch.end.callback=mx.callback.log.train.metric(100))

# predict !
drops <- c("ID","Label")
test.x <- test[,!(names(test) %in% drops)]
test.x <- t(test.x/255)
possib <- predict(
  model,
  test.x,
  array.layout = "colmajor")
preds <- apply(t(possib), 1, which.max) - 1
OutputDataSet <- data.frame(
  ID = test$ID,
  ActualLabel = test$Label,
  PredictedLabel = preds)
',
  @input_data_1 = N'SELECT * FROM MnistImgTbl ORDER BY ID ASC',
  @input_data_1_name = N'MnistData',
  @parallel = 1,
  @params = N'@r_rowsPerRead int',
  @r_rowsPerRead = 50000
GO
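As a side note on the last prediction lines of the R script: each column of possib holds the class probabilities for one test image, and the predicted digit is the index of the largest probability (R’s which.max is 1-based, hence the “- 1”). The same decoding step, sketched in Python with toy probabilities:

```python
# Decoding softmax outputs into predicted digits, as the R code does
# with apply(t(possib), 1, which.max) - 1. The probability rows here
# are toy values for illustration (4 classes instead of 10).
possib = [
    [0.05, 0.10, 0.70, 0.15],   # most probable class: index 2
    [0.60, 0.20, 0.10, 0.10],   # most probable class: index 0
]
preds = [p.index(max(p)) for p in possib]
print(preds)  # [2, 0]
```

Python indices are already 0-based, so no “- 1” correction is needed here.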
This network is trivial by usual AI-workload standards, and the data size is also not that large.
But remember that SQL Server limits external resources by default, so the external runtime will probably have insufficient resources for most tasks, even though servers nowadays have a large amount of RAM.
As a result, unfortunately, this code will frequently fail with allocation errors.
One workaround is to assign more resources to the external runtime (see the official document “Advanced analytics on SQL Server – Create a resource pool for machine learning” for these settings), but you must be careful again if your data is larger, like CIFAR-10, SVHN, ImageNet, etc.
Another workaround is to use RevoScaleR or MicrosoftML functions, which read data chunk by chunk and scale your ML and deep learning workloads across multiple processes. (See my previous post about RevoScaleR functions and their benefits in SQL Server.)
Now let’s look at the next example.
Here we are using the rxNeuralNet() function in MicrosoftML (rx_neural_net() in Python) and defining an equivalent network in the Net# format.
By using rxNeuralNet(), you can take advantage of both data streaming and parallelization in deep learning workloads. Moreover, you can use the SQL Server compute context and invoke in-database workloads from a remote client. (Debugging is also easy.)
The following code uses the SQL Server compute context.
Note : Sorry, but you cannot use “.” (the period character) in the formula, as in “Label ~ . - ID”.
### Set compute context
con <- "Driver=SQL Server;Server=18.104.22.168;Database=testdb;Uid=demouser01;Pwd=P@ssw0rd"
sqlCompute <- RxInSqlServer(
  connectionString = con,
  wait = T,
  numTasks = 5,
  consoleOutput = F)
rxSetComputeContext(sqlCompute)
#rxSetComputeContext("local") # if debugging in local !

### Train
train <- RxSqlServerData(
  connectionString = con,
  databaseName = "testdb",
  sqlQuery = "SELECT TOP 41000 * FROM MnistImgTbl ORDER BY ID ASC",
  rowsPerRead = 10000,
  stringsAsFactors = T)
net <- ("input Data [28, 28];
hidden H1 [128] rlinear from Data all;
hidden H2 [64] rlinear from H1 all;
output Out [10] softmax from H2 all;")
model <- rxNeuralNet(
  Label ~ Pixel1 + Pixel2 + Pixel3 + ... +
    Pixel101 + Pixel102 + Pixel103 + ... +
    Pixel201 + Pixel202 + Pixel203 + ... +
    Pixel301 + Pixel302 + Pixel303 + ... +
    Pixel401 + Pixel402 + Pixel403 + ... +
    Pixel501 + Pixel502 + Pixel503 + ... +
    Pixel601 + Pixel602 + Pixel603 + ... +
    Pixel701 + Pixel702 + ..... + Pixel784,
  data = train,
  netDefinition = net,
  #acceleration = "gpu", # when using GPU
  type = "multiClass"
)

### Predict
test <- RxSqlServerData(
  connectionString = con,
  databaseName = "testdb",
  sqlQuery = "SELECT TOP 1000 * FROM MnistImgTbl ORDER BY ID DESC",
  rowsPerRead = 10000,
  stringsAsFactors = T)
results <- rxPredict(
  model,
  data = test,
  extraVarsToWrite = c("ID","Label")
)
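Since “.” cannot be used in the formula, all 784 pixel columns must be listed explicitly. Rather than typing them by hand, the formula string can be generated programmatically; a Python sketch below (the same idea works in R with paste0() and as.formula()):

```python
# Generate the long explicit formula "Label ~ Pixel1 + ... + Pixel784"
# instead of writing it out by hand.
formula = "Label ~ " + " + ".join(f"Pixel{i}" for i in range(1, 785))

print(formula[:30])           # beginning of the formula string
print(formula.count("Pixel")) # 784
```

This guarantees no pixel column is accidentally skipped or duplicated.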
With SQL Server Enterprise, the RevoScaleR and MicrosoftML functions process the data 10,000 rows at a time and can operate on a large volume of data without hitting those limits. As a result, you can easily get the following data frame (variable: results) with the predicted results, without any configuration changes.
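With ID, ActualLabel, and PredictedLabel side by side, scoring the predictions is straightforward. A minimal Python sketch (the rows below are invented for illustration; only the column names match the post):

```python
# Compute accuracy from rows shaped like the returned data frame.
# The three sample rows are made up for illustration.
results = [
    {"ID": 42000, "ActualLabel": 7, "PredictedLabel": 7},
    {"ID": 41999, "ActualLabel": 3, "PredictedLabel": 3},
    {"ID": 41998, "ActualLabel": 5, "PredictedLabel": 6},  # one miss
]
correct = sum(r["ActualLabel"] == r["PredictedLabel"] for r in results)
accuracy = correct / len(results)
print(round(accuracy, 3))  # 0.667
```

The same check in R is a one-liner along the lines of mean(results$ActualLabel == results$PredictedLabel).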
Note : Net# doesn’t support some expressions, such as dropout or chains of connections. If you need these expressions, please use the former approach.
See “Guide to Net# neural network specification language for Azure Machine Learning” for details about Net#.