Build a Web-Based Tool to Identify Superheroes Using Azure Custom Vision

OVERVIEW
In a previous blog post, I used classification in Custom Vision to create a model that could identify superheroes, a.k.a. vigilantes.
In this blog post, I will dive with you into how you can publish that model and consume it as an API from your own apps!
STEP 1
Walk through my previous blog post first, as I will pick up where I left off!
STEP 2
I will assume you have already created your first project. Now open your project by clicking on it.

Click “Performance” from the navigation bar.

Click “Publish” and choose a name for the published iteration.


Click “Prediction URL”

You will get all the information you need to consume your model from your own solutions with ease!
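Before wiring this into a page, it helps to see what a prediction request looks like. The sketch below is a minimal, hypothetical helper (the `buildPredictionRequest` name and placeholder values are mine, not from the portal) that builds the `fetch` options with the two headers Custom Vision expects: your prediction key and a binary content type.

```javascript
// Build fetch() options for a Custom Vision image prediction call.
// The key comes from the "Prediction URL" dialog shown above.
function buildPredictionRequest(imageBytes, predictionKey) {
  return {
    method: 'POST',
    headers: {
      // Authenticates the call against your published model
      'Prediction-Key': predictionKey,
      // Raw image bytes are sent as a binary body
      'Content-Type': 'application/octet-stream'
    },
    body: imageBytes
  };
}

// Usage (placeholder values):
// fetch('PREDICTION_URL', buildPredictionRequest(fileBytes, 'PREDICTION_KEY'))
//   .then(res => res.json())
//   .then(json => console.log(json.predictions));
```

We will use exactly this shape of request, via `XMLHttpRequest`, in the script later on.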

STEP 3
It’s time to write some code, but don’t worry, I will make it easy for you!
First, create a folder (call it anything you like), then create three files inside it: an HTML file named index.html, a CSS file named style.css, and a JS file named script.js.
Use the following code to populate your HTML file and build the front-end of your tool!
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>who is the vigilante</title>
  <link rel='stylesheet' href='https://cdnjs.cloudflare.com/ajax/libs/materialize/1.0.0/css/materialize.min.css'>
  <link rel='stylesheet' href='https://fonts.googleapis.com/icon?family=Material+Icons'>
  <link rel="stylesheet" href="./style.css">
</head>
<body>
  <nav id="navigation">
    <div class="nav-wrapper">
      <a href="#" class="brand-logo center">who is the vigilante</a>
      <ul id="nav-mobile" class="left hide-on-med-and-down"></ul>
    </div>
  </nav>
  <div class="container">
    <!-- File picker for the image to classify -->
    <div class="file-field input-field">
      <div class="btn">
        <span>file</span>
        <input type="file" accept="image/*" id="capture" capture="camera">
      </div>
      <div class="file-path-wrapper">
        <input class="file-path validate" type="text">
      </div>
    </div>
    <!-- Preview of the selected picture -->
    <div class="row">
      <div class="col l4 xl4"></div>
      <div class="picture col s12 l4 m12 xl4">
        <div id="picturepreview">
          <img id="predictedpicture" src="" alt="" />
        </div>
      </div>
      <div class="col l4 xl4"></div>
    </div>
    <!-- Predict button -->
    <div class="row">
      <div class="col l4 xl4"></div>
      <div class="button col s12 l4 m12 xl4">
        <div class="predictbutton">
          <button id="button" class="waves-effect waves-light btn-large center-align" type="submit" name="action">predict
            <i class="material-icons right">send</i>
          </button>
        </div>
      </div>
      <div class="col l4 xl4"></div>
    </div>
    <!-- Results table: tag name and probability -->
    <div class="row">
      <div class="col l4 xl4"></div>
      <div class="col s12 l4 m12 xl4">
        <table class="centered" id="myTable">
          <thead>
            <tr>
              <th>name</th>
              <th>accuracy</th>
            </tr>
          </thead>
          <tbody></tbody>
        </table>
      </div>
      <div class="col l4 xl4"></div>
    </div>
  </div>
  <script src='https://cdnjs.cloudflare.com/ajax/libs/materialize/1.0.0/js/materialize.min.js'></script>
  <script src="./script.js"></script>
</body>
</html>
Use the following code to populate your CSS file to make your front-end look a little nicer.
#navigation {
  background-color: teal;
}

#picturepreview {
  text-align: center;
}

#predictedpicture {
  width: 200px;
  height: auto;
  object-fit: cover;
}

/* Keep the picture and button columns centered on small screens */
.picture.col.m12 {
  float: none;
}
.picture.col.l4 {
  float: left;
}
.button.col.m12 {
  float: none;
}
.button.col.l4 {
  float: left;
}

.predictbutton {
  text-align: center;
}

td {
  text-align: center;
}
To make your tool function add the JavaScript below to your JS file.
window.addEventListener("load", function () {
  // Preview the selected image as soon as a file is chosen
  document.getElementById('capture').onchange = function (evt) {
    var tgt = evt.target || window.event.srcElement,
        files = tgt.files;
    if (FileReader && files && files.length) {
      var fr = new FileReader();
      fr.onload = function () {
        document.getElementById('predictedpicture').src = fr.result;
      };
      fr.readAsDataURL(files[0]);
    }
  };

  document.getElementById('button').addEventListener("click", function () {
    const file = document.getElementById('capture').files[0];
    // Send the raw image bytes to the Custom Vision prediction endpoint
    var URL = "PREDICTION_URL";
    var xhr = new XMLHttpRequest();
    xhr.open('POST', URL, true);
    xhr.setRequestHeader('Prediction-Key', 'PREDICTION_KEY');
    xhr.setRequestHeader('Content-Type', 'application/octet-stream');
    xhr.onreadystatechange = processRequest;
    xhr.send(file);

    function processRequest(e) {
      if (xhr.readyState == 4 && xhr.status == 200) {
        var json = JSON.parse(xhr.responseText);
        var table = document.getElementById("myTable");
        // Clear results from any previous prediction
        var tbody = table.tBodies[0];
        while (tbody.rows.length) { tbody.deleteRow(0); }
        // Insert in reverse so the highest probability ends up on top
        for (var i = json.predictions.length - 1; i >= 0; i--) {
          var row = table.insertRow(1);
          row.insertCell(0).innerHTML = json.predictions[i]['tagName'];
          row.insertCell(1).innerHTML = json.predictions[i]['probability'] * 100 + '%';
        }
      }
    }
  }, false);
}, false);
Make sure to replace PREDICTION_URL and PREDICTION_KEY in the JS file with the values you got at the end of STEP 2, under the “If you have an image file” section.

Now let’s put our little tool to the test!

It works like a charm! But where do the images we use for prediction go? And can we use them to re-train our model and make it more powerful?
Yes: all the images users upload for prediction appear under the “Predictions” section of the portal, so you can use them to strengthen your model and make it more accurate!

Summary
I went from creating a classification model to going live with a web-based tool in just a few minutes, and I can’t stop thinking about what else could be built with such a service.
Imagine how this could be used in fields like healthcare for example!
Thank you so much!