
Run FPGA Accelerated Serving (“Project Brainwave”)

Azure Machine Learning Hardware Accelerated Models (Project “Brainwave”) provides hardware-accelerated machine learning on field-programmable gate arrays (FPGAs). With Brainwave you can serve real-time inference (online prediction) without mini-batching for mission-critical applications, on IoT devices or in the cloud, even with a large trained model and large inputs such as images.

Imagine use cases such as obstacle detection in autonomous vehicles, machinery control on a factory line, and so on. If inference takes more than a few hundred milliseconds, it could cause a serious accident. (Though that particular case cannot be implemented in the cloud …) Real-time AI is a very hot topic for meeting these mission-critical requirements.
FPGAs are cost-effective compared with GPUs. Another advantage of FPGAs over GPUs is that you can get high performance without batch execution. As you will see later in this post, you can achieve high throughput with FPGAs even when running many individual (single-image) inferences.
There are several options for achieving real-time AI, but FPGAs are among the more realistic and practical ones.

You can now try the Project Brainwave preview with Intel FPGA devices in the cloud, one of the new Azure Machine Learning features announced at Microsoft Build 2018. As you will see later, Project Brainwave is based on transfer learning, and you can easily try FPGA-enabled serving in the Azure cloud. (It will also enable deployment on the edge (IoT devices) in the future, but for now it runs in the cloud.)
The current preview is limited in the available frameworks (only TensorFlow deployment) and available models (only ResNet-50 for image featurization), but there are plans to support more gallery models and other frameworks such as CNTK and Caffe in the future. (See the video “Build 2018 : Hyperscale hardware: ML at scale on top of Azure + FPGA” for details.)

The overall flow for creating an FPGA-enabled service is as follows:

  1. Currently the FPGA-enabled service provides only image featurization with pre-trained models (resnet50, etc). You build the other graphs (input transformation, classification, etc) with TensorFlow yourself and define the pipeline of these steps (input -> featurize -> classify) in a service definition json file.
    The generated definition and graphs must be zipped into one archive.
    (See ModelDefinition() in the quickstart sample.)
  2. Put the zip file in blob storage and register this model with the Azure ML model management API.
    (See Model.register() in the quickstart sample.)
  3. Deploy (create) your service from the registered model using the Azure ML model management API. The service is published as a web service over gRPC.
    (See WebService.deploy_from_model() in the quickstart sample.)

The quickstart tutorial sample provides helper classes and functions which encapsulate the boilerplate code for these steps in Python.
In this post I walk through the same steps as the quickstart sample, but without these helpers, so that you can understand how the new FPGA-enabled services work behind the scenes.

Build Entire Graph

Here we build the following graph (see the diagram below) for image classification prediction.
The “Image Featurizer” part (resnet50) is already deployed in Brainwave, so you must build or bring the other “Transform Input” and “Image Classifier” graphs on your own.

For instance, the following is our sample code for the “Transform Input” graph above. This code reads the image binary, transforms it into a tensor of shape (n, 224, 224, 3) of float values (224 x 224 images with 3 channels for red, green, and blue), and finally applies the VGG preprocessing.
The quickstart sample in the GitHub repo resizes the input to 224 x 224, but for simplicity here we assume that the input image always has size 224 x 224.

The generated model file (input_trans.pb) is used for model provisioning later.

import os
import tensorflow as tf

transform_model_file = '/home/tsmatsuz/test/input_trans.pb'

#
# Define transform process
#
transformGraph = tf.Graph()
with transformGraph.as_default():
  in_images = tf.placeholder(tf.string, name='in_images')
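  # note : in TensorFlow, decode_png also accepts JPEG input (the image format is detected from the contents)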
  decoded_input = tf.image.decode_png(in_images, channels=3)
  float_input = tf.cast(decoded_input, dtype=tf.float32)
  # (224, 224, 3) -> (n, 224, 224, 3)
  rgb_input = tf.expand_dims(
    float_input,
    axis=0)
  # For VGG preprocessing, subtract the channel means and convert RGB to BGR
  slice_red = tf.slice(
    rgb_input,
    [0, 0, 0, 0],
    [1, 224, 224, 1])
  slice_green = tf.slice(
    rgb_input,
    [0, 0, 0, 1],
    [1, 224, 224, 1])
  slice_blue = tf.slice(
    rgb_input,
    [0, 0, 0, 2],
    [1, 224, 224, 1])
  sub_red = tf.subtract(slice_red, 123.68)
  sub_green = tf.subtract(slice_green, 116.779)
  sub_blue = tf.subtract(slice_blue, 103.939)
  transferred_input = tf.concat(
    [sub_blue, sub_green, sub_red],
    3,
    name='transferred_input')
  print(in_images.name)
  print(transferred_input.name)

#
# Save graph
#
with tf.Session(graph=transformGraph) as sess1:
  with tf.gfile.GFile(transform_model_file, 'wb') as f:
    f.write(sess1.graph_def.SerializeToString())
print('saved input transform graph')

For the “Image Classifier” part, you can also build your own classification graph and use it inside your FPGA-enabled service. For instance, you could train a custom classifier that distinguishes only “Dog” or “Cat” and use that trained classifier in your FPGA service. (See the sketch below for an illustration.)
In this post, however, we download a pre-built classification graph trained on the ImageNet dataset and use this downloaded model (resnet50_classifier.pb) for our “Image Classifier” part. (You can download resnet50_classifier.pb from here.)
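For illustration only, here is a minimal sketch of such a custom two-class (“Dog” / “Cat”) classifier graph. The tensor names, shapes, and classes below are assumptions for this example; the training loop is omitted, and this graph is not used in the rest of this post.

import tensorflow as tf

# Hypothetical "dog" / "cat" classifier on top of the featurizer output.
# The featurizer output (resnet_v1_50/pool5) has shape (batch, 1, 1, 2048).
dogcat_graph = tf.Graph()
with dogcat_graph.as_default():
  features = tf.placeholder(
    tf.float32, shape=[None, 1, 1, 2048], name='Input')
  flat = tf.reshape(features, [-1, 2048])
  logits = tf.layers.dense(flat, 2)  # 2 classes : dog / cat
  probs = tf.nn.softmax(logits, name='output_probs')
  # ... train the dense layer here, then freeze the graph (see the Note in
  # "Generate Model and Upload" below) and save it as a .pb file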

You can also download the “Image Featurizer” graph (resnet50.pb), which is the same graph that runs on the FPGA (download from here).
Now let's combine these three graphs (“Transform Input”, “Image Featurizer”, and “Image Classifier”) and run the entire process on your local machine for debugging, as follows.

import os
import time
import requests

import tensorflow as tf

#
# Load transform graph def (previously saved graph)
#
transform_model_file = '/home/tsmatsuz/test/input_trans.pb'
transform_graph_def = tf.GraphDef()
with tf.gfile.Open(transform_model_file, 'rb') as f:
  data = f.read()
  transform_graph_def.ParseFromString(data)
# import as current default Graph
tf.import_graph_def(
  transform_graph_def,
  name='transform_graph')
# get input tensor
in_images_tensor = tf.get_default_graph().get_tensor_by_name(
  'transform_graph/in_images:0')
# get output tensor
transferred_input_tensor = tf.get_default_graph().get_tensor_by_name(
  'transform_graph/transferred_input:0')
print('loaded input transform graph')

#
# Load featurizer graph def
#
featurizer_model_file = '/home/tsmatsuz/test/resnet50.pb'
featurizer_graph_def = tf.GraphDef()
with tf.gfile.Open(featurizer_model_file, 'rb') as f:
  data = f.read()
  featurizer_graph_def.ParseFromString(data)
# import as current default Graph
tf.import_graph_def(
  featurizer_graph_def,
  input_map = {'InputImage': transferred_input_tensor}, # set input tensor
  name='featurizer_graph')
# get output tensor
featurizer_output_tensor = tf.get_default_graph().get_tensor_by_name(
  'featurizer_graph/resnet_v1_50/pool5:0')
print('loaded featurizer graph')

#
# Load classification graph def
#
classifier_model_file = '/home/tsmatsuz/test/resnet50_classifier.pb'
classifier_graph_def = tf.GraphDef()
with tf.gfile.Open(classifier_model_file, 'rb') as f:
  data = f.read()
  classifier_graph_def.ParseFromString(data)
# import as current default Graph
tf.import_graph_def(
  classifier_graph_def,
  input_map = {'Input': featurizer_output_tensor}, # set input tensor
  name='classifier_graph')
classifier_output_tensor = tf.get_default_graph().get_tensor_by_name(
  'classifier_graph/resnet_v1_50/logits/Softmax:0')
print('loaded classifier graph')

#
# Predict images ! (Check result)
#
image_files = [
  '/home/tsmatsuz/test/tiger224x224.jpg',
  '/home/tsmatsuz/test/lion224x224.jpg',
  '/home/tsmatsuz/test/orangutan224x224.jpg'
]
# get the ImageNet class labels
classes_entries = requests.get('https://raw.githubusercontent.com/Lasagne/Recipes/master/examples/resnet50/imagenet_classes.txt').text.splitlines()
with tf.Session() as sess2:
  for image_file in image_files:
    with open(image_file, 'rb') as f:
      data = f.read()
    feed_dict = {
      in_images_tensor: data
    }
    print('started inference')
    start_time = time.process_time()
    result = sess2.run([classifier_output_tensor], feed_dict=feed_dict)
    stop_time = time.process_time()
    print('finished inference')
    # result is a list holding one 1 x n ndarray : [[1.16643378e-06 3.12126781e-06 3.39836406e-05 ... ]]
    nd_result = result[0]
    # remove the batch dimension
    onedim_result = nd_result[0,]
    # attach the class index to each probability
    indexed_result = enumerate(onedim_result)
    # sort by probability (descending)
    sorted_result = sorted(indexed_result, key=lambda x: x[1], reverse=True)
    # print the names of the top 5 classes
    for top in sorted_result[:5]:
      print(classes_entries[top[0]], 'confidence:', top[1])
    print('{:.2f} milliseconds'.format((stop_time-start_time)*1000))

The following are the results of this code for the tiger, lion, and orangutan images. Prepare your own 224 x 224 images on your local machine and try it by yourself !

tiger, Panthera tigris confidence: 0.98686683
tiger cat confidence: 0.012111436
zebra confidence: 0.00042723425
lynx, catamount confidence: 0.00028331927
jaguar, panther, Panthera onca, Felis onca confidence: 0.00011977311

lion, king of beasts, Panthera leo confidence: 0.9965641
chow, chow chow confidence: 0.0018761579
Tibetan mastiff confidence: 0.000651932
bison confidence: 0.00016523586
ox confidence: 0.00015732956

orangutan, orang, orangutang, Pongo pygmaeus confidence: 0.71014076
chimpanzee, chimp, Pan troglodytes confidence: 0.1675623
gorilla, Gorilla gorilla confidence: 0.045721605
hippopotamus, hippo, river horse, Hippopotamus amphibius confidence: 0.013812806
cougar, puma, catamount, mountain lion, painter, panther, Felis concolor confidence: 0.006618797

Generate Model and Upload

Now we package our model, upload it to blob storage, and get a SAS url.

First, we define our entire pipeline in a json file named “service_def.json” as follows.
The three models in the pipeline correspond to “Transform Input”, “Image Featurizer”, and “Image Classifier” in the previous diagram, and they are executed in order. As you can see, the second entry refers to the pre-defined resnet50 model running on FPGA, while the others are executed on CPU.

service_def.json

{
  "aml_runtime_version": "1.1",
  "pipeline": [
    {
      "model_path": "input_trans.pb",
      "input_tensor": "in_images:0",
      "output_tensor": "transferred_input:0",
      "type": "tensorflow"
    },
    {
      "model_ref": "resnet50",
      "model_version": "1.1.6-rc",
      "output_tensor_dims": [ 1, 1, 2048 ],
      "type": "brainwave"
    },
    {
      "model_path": "resnet50_classifier.pb",
      "input_tensor": "Input:0",
      "output_tensor": "resnet_v1_50/logits/Softmax:0",
      "type": "tensorflow"
    }
  ]
}

We create a zip file from input_trans.pb, resnet50_classifier.pb, and service_def.json as follows. Later this archive (model.zip) is used as the model file in Azure ML model management.

zip model.zip input_trans.pb resnet50_classifier.pb service_def.json

Upload this archive (model.zip) into blob storage and get this blob’s SAS url.
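For reference, this upload step could also be scripted. The following is a minimal sketch assuming the azure-storage Python SDK of that era (BlockBlobService); the storage account name, key, and container name are placeholders, and you can equally well use the Azure portal or Azure Storage Explorer.

from datetime import datetime, timedelta
from azure.storage.blob import BlockBlobService, BlobPermissions

# placeholders : use your own storage account, key, and container
blob_service = BlockBlobService(
  account_name='storedemo01',
  account_key='<your_storage_account_key>')

# upload model.zip into the "models" container
blob_service.create_blob_from_path('models', 'model.zip', 'model.zip')

# generate a read-only SAS token (valid for one year here)
sas_token = blob_service.generate_blob_shared_access_signature(
  'models',
  'model.zip',
  permission=BlobPermissions.READ,
  expiry=datetime.utcnow() + timedelta(days=365))

# this SAS url is used when registering the model later
sas_url = blob_service.make_blob_url('models', 'model.zip', sas_token=sas_token)
print(sas_url)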

Note : Here we uploaded models (.pb) that are not optimized, but I recommend building frozen models using tf.graph_util.convert_variables_to_constants() and tf.graph_util.remove_training_nodes() .
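For example, a frozen classifier graph could be produced with something like the following sketch, assuming TensorFlow 1.x and a session sess in which the classifier graph and its variables have been built and initialized. (The output file name is just an example.)

import tensorflow as tf

# `sess` is assumed to be a tf.Session holding the classifier graph
# with its variables initialized or restored
frozen_graph_def = tf.graph_util.convert_variables_to_constants(
  sess,
  sess.graph_def,
  ['resnet_v1_50/logits/Softmax'])  # output node names (without ":0")
frozen_graph_def = tf.graph_util.remove_training_nodes(frozen_graph_def)

with tf.gfile.GFile('/home/tsmatsuz/test/resnet50_classifier_frozen.pb', 'wb') as f:
  f.write(frozen_graph_def.SerializeToString())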

Deploy Model and Service with Model Management

Now we deploy our artifacts and create our service using the Model Management API in Azure Machine Learning.
Here I show the raw REST API calls so that you can see what is done under the hood, but you can use the Python SDK for Azure Machine Learning (wrapper functions), as shown in the quickstart tutorial.

Note : In the current preview you must first submit a form to request FPGA-enabled model management quota and get approved. (Otherwise the following steps will fail.)

Before starting, you must create a service principal and set RBAC permissions so that the service principal can access your model management account. (I don't describe these steps here; please see my earlier post “Use Azure REST API without interactive Login UI” for details.)
Now you can get an access token for invoking the model management API as follows. (The generated access token expires in one hour; please refresh the token after that.)
Here “b3ae1c15-4fef-4362-8c3a-5d804cdeb18d” should be your Azure subscription id, “5301bd12-…” your service principal's id (application id), and “M1Y2Ii7…” your service principal's secret (key).

HTTP Request

POST https://login.microsoftonline.com/{your_tenant}.onmicrosoft.com/oauth2/token
Accept: application/json
Content-Type: application/x-www-form-urlencoded

resource=https%3A%2F%2Fmanagement.core.windows.net%2F&client_id=5301bd12-...&client_secret=M1Y2Ii7...&grant_type=client_credentials

HTTP Response

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

{
  "token_type": "Bearer",
  "expires_in": "3600",
  "ext_expires_in": "0",
  "expires_on": "1526953681",
  "not_before": "1526949781",
  "resource": "https://management.core.windows.net/",
  "access_token": "eyJ0eXAiOi..."
}
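If you prefer to script this step, here is a minimal sketch with the Python requests library, equivalent to the raw HTTP above. (Tenant, client id, and secret are placeholders.)

import requests

tenant = '<your_tenant>.onmicrosoft.com'
client_id = '5301bd12-...'      # service principal's application id
client_secret = 'M1Y2Ii7...'    # service principal's secret (key)

token_res = requests.post(
  'https://login.microsoftonline.com/{}/oauth2/token'.format(tenant),
  data={
    'resource': 'https://management.core.windows.net/',
    'client_id': client_id,
    'client_secret': client_secret,
    'grant_type': 'client_credentials'
  })
access_token = token_res.json()['access_token']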

Next, call the following model management endpoint.
Here “myrg01” is your resource group, “mmacc01” is your model management account, and “eyJ0eXAiOi…” is the previously retrieved access token. (You must specify the access token in all of the following requests.)

The value https://eastus2.modelmanagement.azureml.net/api/ in “modelManagementSwaggerLocation” (in the HTTP response) is the API endpoint base for us.

HTTP Request

GET https://management.azure.com/subscriptions/b3ae1c15-4fef-4362-8c3a-5d804cdeb18d/resourcegroups/myrg01/providers/Microsoft.MachineLearningModelManagement/accounts/mmacc01?api-version=2017-09-01-preview
Accept: application/json
Authorization: BEARER eyJ0eXAiOi...

HTTP Response

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

{
  "id": "/subscriptions/b3ae1c15-4fef-4362-8c3a-5d804cdeb18d/resourceGroups/myrg01/providers/Microsoft.MachineLearningModelManagement/accounts/mmacc01",
  "name": "mmacc01",
  "type": "Microsoft.MachineLearningModelManagement/accounts",
  "location": "eastus2",
  "tags": null,
  "sku": {
    "name": "S1",
    "capacity": 1
  },
  "properties": {
    "createdOn": "2018-05-09T01:55:02.6997402Z",
    "modifiedOn": "2018-05-09T01:55:02.6997402Z",
    "description": "",
    "modelManagementSwaggerLocation": "https://eastus2.modelmanagement.azureml.net/api/subscriptions/b3ae1c15-4fef-4362-8c3a-5d804cdeb18d/resourceGroups/myrg01/accounts/mmacc01/swagger.json?api-version=2017-09-01-preview"
  }
}

Now let's register our previously generated model (model.zip) with our model management account as follows. The “url” in the json body is the model file's SAS url.
Copy the model's id (here “ec0c03fbcd7745a6a82877102e681f52”) from the HTTP response for the following service provisioning.

HTTP Request

POST https://eastus2.modelmanagement.azureml.net/api/subscriptions/b3ae1c15-4fef-4362-8c3a-5d804cdeb18d/resourceGroups/myrg01/accounts/mmacc01/models?api-version=2018-04-01-preview
Authorization: BEARER eyJ0eXAiOi...
Accept: application/json
Content-Type: application/json

{
  "name": "model.zip",
  "mimeType": "application/zip",
  "url": "https://storedemo01.blob.core.windows.net/models/model.zip?sp=r&st=2018-05-21T08:30:14Z&se=2019-05-20T16:30:14Z&spr=https&sv=2017-11-09&sig=QFRFLVAjZYJixzB%2FI3RtLqMPZezZvPmaC65pFXu6MDY%3D&sr=b",
  "unpack": false
}

HTTP Response

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

{
  "id": "ec0c03fbcd7745a6a82877102e681f52",
  "name": "model.zip",
  "version": 1,
  "tags": [],
  "url": "https://storedemo01.blob.core.windows.net/models/model.zip?sp=r&st=2018-05-21T08:30:14Z&se=2019-05-20T16:30:14Z&spr=https&sv=2017-11-09&sig=QFRFLVAjZYJixzB%2FI3RtLqMPZezZvPmaC65pFXu6MDY%3D&sr=b",
  "mimeType": "application/zip",
  "description": null,
  "createdAt": "2018-05-22T00:20:36.3237432+00:00",
  "unpack": false
}
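The same registration can be scripted with requests as a rough equivalent of the raw HTTP above, reusing access_token from the token sketch and sas_url (the blob's SAS url) from the upload sketch:

import requests

api_base = ('https://eastus2.modelmanagement.azureml.net/api'
            '/subscriptions/b3ae1c15-4fef-4362-8c3a-5d804cdeb18d'
            '/resourceGroups/myrg01/accounts/mmacc01')
headers = {
  'Authorization': 'Bearer {}'.format(access_token),
  'Content-Type': 'application/json'
}

register_res = requests.post(
  api_base + '/models?api-version=2018-04-01-preview',
  headers=headers,
  json={
    'name': 'model.zip',
    'mimeType': 'application/zip',
    'url': sas_url,   # the model file's SAS url
    'unpack': False
  })
model_id = register_res.json()['id']
print(model_id)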

Now let's deploy (create and run) our FPGA-enabled service using the previously registered model as follows.
You must set “FPGA” as “computeType”, and “ec0c03fbcd7745a6a82877102e681f52” is the previously registered model's id.

HTTP Request

POST https://eastus2.modelmanagement.azureml.net/api/subscriptions/b3ae1c15-4fef-4362-8c3a-5d804cdeb18d/resourceGroups/myrg01/accounts/mmacc01/services?api-version=2018-04-01-preview
Authorization: BEARER eyJ0eXAiOi...
Accept: application/json
Content-Type: application/json

{
  "computeType": "FPGA",
  "modelId": "ec0c03fbcd7745a6a82877102e681f52",
  "name": "test-service",
  "numReplicas": 1,
  "sslEnabled": false,
  "sslCertificate": "",
  "sslKey": ""
}

HTTP Response

HTTP/1.1 202 Accepted
Operation-Location: /api/subscriptions/b3ae1c15-4fef-4362-8c3a-5d804cdeb18d/resourceGroups/myrg01/accounts/mmacc01/operations/9699b3f5-2912-40b4-a6b9-67580f4fa6f5

The above HTTP response says that the deployment has not finished yet and returns the deployment operation's id. (Deployment runs in the background.)
You can then check the operation status with the “Operation-Location” url above, as follows.

HTTP Request

GET https://eastus2.modelmanagement.azureml.net/api/subscriptions/b3ae1c15-4fef-4362-8c3a-5d804cdeb18d/resourceGroups/myrg01/accounts/mmacc01/operations/9699b3f5-2912-40b4-a6b9-67580f4fa6f5?api-version=2018-04-01-preview
Authorization: BEARER eyJ0eXAiOi...
Accept: application/json
Content-Type: application/json

HTTP Response

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

{
  "id": "9699b3f5-2912-40b4-a6b9-67580f4fa6f5",
  "operationType": "FPGAService",
  "state": "Succeeded",
  "createdTime": "2018-05-22T00:35:50.8392375Z",
  "endTime": "2018-05-22T00:36:37.2907804Z",
  "resourceLocation": "/api/subscriptions/b3ae1c15-4fef-4362-8c3a-5d804cdeb18d/resourceGroups/myrg01/accounts/mmacc01/services/2785a20e601f42d1bc4a8b162749d2c5"
}
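Scripted with requests (reusing api_base, headers, and model_id from the previous sketches), the deployment and the status polling above could look like this:

import time
import requests

# create the FPGA-enabled service (asynchronous operation)
deploy_res = requests.post(
  api_base + '/services?api-version=2018-04-01-preview',
  headers=headers,
  json={
    'computeType': 'FPGA',
    'modelId': model_id,
    'name': 'test-service',
    'numReplicas': 1,
    'sslEnabled': False,
    'sslCertificate': '',
    'sslKey': ''
  })
operation_url = ('https://eastus2.modelmanagement.azureml.net'
                 + deploy_res.headers['Operation-Location']
                 + '?api-version=2018-04-01-preview')

# poll the operation until the deployment finishes
while True:
  op = requests.get(operation_url, headers=headers).json()
  if op['state'] in ('Succeeded', 'Failed'):
    break
  time.sleep(10)
print(op['state'], op.get('resourceLocation'))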

After the service deployment has succeeded, you can get the service spec from the “resourceLocation” uri above, as follows.
Copy the service's ipAddress and port from the HTTP response.

HTTP Request

GET https://eastus2.modelmanagement.azureml.net/api/subscriptions/b3ae1c15-4fef-4362-8c3a-5d804cdeb18d/resourceGroups/myrg01/accounts/mmacc01/services/2785a20e601f42d1bc4a8b162749d2c5?api-version=2018-04-01-preview
Authorization: BEARER eyJ0eXAiOi...
Accept: application/json
Content-Type: application/json

HTTP Response

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

{
  "modelId": "ec0c03fbcd7745a6a82877102e681f52",
  "numReplicas": 1,
  "ipAddress": "40.114.26.60",
  "port": 80,
  "sslEnabled": false,
  "sslCertificate": "",
  "sslKey": "",
  "keys": {
    "primaryKey": "X6f3yIVULB...",
    "secondaryKey": "OzO6wU7TJN..."
  },
  "id": "2785a20e601f42d1bc4a8b162749d2c5",
  "name": "test-service",
  "state": "Succeeded",
  "createdAt": "2018-05-22T00:35:50.8392696+00:00",
  "updatedAt": "2018-05-22T00:35:50.8392696+00:00",
  "computeEnvironmentType": "FPGA"
}

Consume Your FPGA-enabled Service !

Now you can use your FPGA-enabled service.
In the current preview, the service endpoint is accessed only over the gRPC protocol, and you must prepare client stub code matching the gRPC server's service definitions.
In this post we use the stub code from the GitHub sample. (See the source code in aml-real-time-ai\pythonlib\amlrealtimeai\external\tensorflow_serving\apis .)

Before calling your service, comment out the following line in aml-real-time-ai\pythonlib\amlrealtimeai\client.py.

def score_tensor(self, data: bytes, shape: list, datatype, timeout: float = 10.0):
  request = predict_pb2.PredictRequest()
  request.inputs['images'].string_val.append(data)
  request.inputs['images'].dtype = datatype
  #request.inputs['images'].tensor_shape.dim.extend(self.make_dim_list(shape))
  return self.__predict(request, timeout)

Now let's invoke our service !

from amlrealtimeai import PredictionClient # for sample classes in GitHub
import requests
import time

# call service !
client = PredictionClient(
  address = "40.114.26.60",
  port = 80)
image_file = 'tiger224x224.jpg'
start_time = time.process_time()
result = client.score_image(image_file)
stop_time = time.process_time()

# check result
classes_entries = requests.get("https://raw.githubusercontent.com/Lasagne/Recipes/master/examples/resnet50/imagenet_classes.txt").text.splitlines()
result = enumerate(result)
sorted_result = sorted(result, key=lambda x: x[1], reverse=True)

print('Top 5 Results:')
for top in sorted_result[:5]:
  print(classes_entries[top[0]], 'confidence:', top[1])
print('')
print('Performance: {:.2f} milliseconds'.format((stop_time-start_time)*1000))

Running my previous local example with one NVIDIA Tesla K80 GPU, I got about 200 milliseconds per image for prediction, with no network latency involved. (The prediction for the first image can be very slow; please run it multiple times.)
How about our remote FPGA-enabled service ?

To reduce network latency as much as possible, we call our FPGA-enabled service from a virtual machine client in the same Azure region (East US 2). The following is our result.

Note that this includes the invocation overhead, and moreover our models (input transformation and classification) are not optimized. (Our model is not frozen.) That is, this is not the exact performance of the FPGA inference alone.
At Build 2018 it was said that the FPGA achieved less than 1.8 milliseconds per image with resnet50.
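If you want a rough wall-clock average on the client side, you can call the service repeatedly, as in the sketch below. It reuses client and image_file from the snippet above; time.perf_counter() is used instead of process_time so that the time spent waiting on the network is included, and the first warm-up call is discarded.

import time

client.score_image(image_file)   # warm-up call (discarded)

num_trials = 20
start_time = time.perf_counter()
for _ in range(num_trials):
  client.score_image(image_file)
stop_time = time.perf_counter()

print('average: {:.2f} milliseconds per image'.format(
  (stop_time - start_time) * 1000 / num_trials))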

Note : The GitHub example “aml-real-time-ai” replaces all the variables in a graph with constants of the same values before saving. This allows the removal of many ops related to loading and saving variables.
For the simplicity of our example code, we didn't optimize our models at all.

After you're done, you can delete your service with the following HTTP request. (Here “2785a20e601f42d1bc4a8b162749d2c5” is your service's id.)

DELETE https://eastus2.modelmanagement.azureml.net/api/subscriptions/b3ae1c15-4fef-4362-8c3a-5d804cdeb18d/resourceGroups/myrg01/accounts/mmacc01/services/2785a20e601f42d1bc4a8b162749d2c5?api-version=2018-04-01-preview
Authorization: BEARER eyJ0eXAiOi...
Accept: application/json

 

[Reference]

Build 2018 : Hyperscale hardware: ML at scale on top of Azure + FPGA

Build 2018 : What’s new with Azure Machine Learning

 
