Getting Started
Detection Documentation
Updated on 9/11/2019
Brain Builder Knowledge Base
Detection Walk-Through


Detection Annotation

Detection models use bounding boxes to provide important information about an object-of-interest's location. This can be useful in a wide variety of industries, such as security monitoring, warehouse packaging, and infrastructure inspection. Training a model for these types of use cases typically requires a large amount of annotated data, which can be a time-consuming and costly process.
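As a rough illustration of what a detection annotation carries (this is a generic sketch, not Brain Builder's internal format), a bounding box is commonly stored as a class label plus pixel coordinates:

```python
from dataclasses import dataclass

@dataclass
class BoundingBox:
    """One detection annotation: a class label plus pixel coordinates.

    Coordinates here follow the common (x_min, y_min, x_max, y_max)
    convention; the exact layout varies between tools, so treat this
    purely as an example.
    """
    label: str
    x_min: int
    y_min: int
    x_max: int
    y_max: int

    @property
    def area(self) -> int:
        # Width times height, in pixels
        return (self.x_max - self.x_min) * (self.y_max - self.y_min)

# Example: a 100x50-pixel box drawn around a bottle
box = BoundingBox("Bottle", x_min=40, y_min=20, x_max=140, y_max=70)
print(box.label, box.area)  # Bottle 5000
```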

Brain Builder's workspace enables faster and easier annotation with features such as brightness and contrast adjustment, brush tools, and the AI-Assisted Video Annotator, which learns tags from a single video frame and intelligently applies them to subsequent frames. The Video Annotator can drastically increase your output of annotated data—for example, if you have ten seconds of video running at 30 frames per second, you can produce 300 annotated images in just a few minutes!
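The frame-count arithmetic behind that example is straightforward to verify:

```python
def annotated_frames(duration_seconds: float, fps: float) -> int:
    """Number of frames (and therefore annotated images) a clip yields."""
    return int(duration_seconds * fps)

# Ten seconds of video at 30 frames per second
print(annotated_frames(10, 30))  # 300
```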

Image Annotation

In this walk-through, we'll practice drawing and labeling bounding boxes on a single image to get you acquainted with the workspace and process. Before we begin, follow this link to download the image we will be working with.

  1. Log in to Brain Builder and click on the Getting Started project.

  2. Click the Create new dataset button. The Create a Dataset window opens.

  3. Enter the Dataset Name as "Detection Walk-Through" and select the Detection bubble for the type of dataset. In the Assign Classes field, create a class called "Bottle." Then, click Add Dataset. The Dataset page opens.

  4. Click the Upload Images/Videos button, and select Choose From Computer. Browse your computer for the image you downloaded earlier, select it and click Open. Then, click the Start Upload button. A progress bar shows you the status of your upload.

  5. Once the image is loaded into your dataset, click Resume Tagging to open the Workspace. You are now ready to begin drawing and labeling bounding boxes.

  6. Select the Bounding Box tool from the toolbar on the left (or by pressing B on your keyboard). Use the vertical and horizontal lines that appear to guide you in drawing a bounding box tightly around one of the bottles. Then, select the Bottle label from the Classes list on the right to assign that label to the bounding box.

    To save time labeling the other bottles, you can copy and paste the first bounding box and drag the copies onto the other bottles in that row. To copy, press Cmd+C (Mac) or Ctrl+C (Windows) on your keyboard, or use the menu at the top-left corner of Brain Builder.

    After you have copied and pasted your bounding box to all bottles in the top row, you can copy and paste the whole row of bottles and drag it onto the other rows.

  7. Once you have finished labeling all of the bottles in the image, click back to the Dataset page from the breadcrumbs link at the top. When prompted, click Save & Continue to save your bounding boxes.

Video Annotation

Now that you have successfully annotated your first image, it's time to give video annotation a try! Brain Builder's AI-Assisted Video Annotator features two different tagging technologies—InstaTag and PolyTag. This walk-through aims to help you understand the difference between the two technologies and the situations each one is best suited for. Follow this link and this link to download the two videos we will be working with.

  1. From the Detection Walk-Through Dataset page, upload the two videos you just downloaded by using the same process as before (see step 4 above).

  2. Once the two videos are loaded into your dataset, click the Manage Classes button. Create two new classes called "Panel" and "Person" by typing each name into the field and clicking Create.

    Click Close to return to the Dataset page. Click on the telecom panels video (the one showing a person on a piece of infrastructure) to open it in the workspace.

  3. Use the Pen Tool to draw precise polygons around the panels, just as you would when annotating an image in a Segmentation dataset. Although you start with polygons, both InstaTag and PolyTag output bounding boxes.

  4. Once you have all the panels labeled, you can run the Video Annotator using both InstaTag and PolyTag to see how they compare in this situation. Click the Annotator button at the bottom-right.

    Notice that InstaTag is selected as the default. For the purpose of this exercise, you can leave the fields as they are and click Start. A progress bar shows you the status of your annotation, and a success message appears when it is finished.

    Then, click the Annotator button again. This time, select PolyTag and click Start. When the annotation is finished, click the Timeline button.

    You can see there are two tracks on the timeline—one for the PolyTag annotation and one for InstaTag. Click the button next to the InstaTag track to hide it for now, and click Play to watch the PolyTag track. Notice how the bounding boxes behave throughout the video as the panels move.

    Then, hide the PolyTag track and watch the InstaTag track. You may notice that the InstaTag bounding boxes remain steadier and more accurate throughout the video than the PolyTag bounding boxes did. That's because InstaTag works better on videos in which the objects move or overlap each other. In that respect it behaves like a tracking technology, and it is also faster.

    PolyTag, on the other hand, is more effective when videos change perspective or objects leave and enter the frame, as it is able to generalize better. Let's test the other video of the person walking on the street to see this in action.

  5. Navigate to the second video, and use the Pen Tool to draw a precise polygon around the prominent person in the center of the frame.

  6. Click the Annotator button at the bottom-right, and run the Video Annotator using both InstaTag and PolyTag as you did with the other video. When both annotations are finished, hide the PolyTag track and watch the InstaTag track first. You can see the bounding box get stuck when the person leaves the frame, and when a new person enters the frame, no bounding box appears around them.

    Now hide the InstaTag track and watch the PolyTag track. You can see that PolyTag works much better on this video, as the bounding box disappears when the person leaves the frame and a new bounding box appears when a person enters the frame.


After this walk-through, you should feel comfortable creating a dataset, uploading data, annotating images, and running the Video Annotator. If at any point you need a refresher on the difference between InstaTag and PolyTag, see the documentation for Bounding Box Annotation on Video.
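The InstaTag-versus-PolyTag guidance from this walk-through can be boiled down to a simple rule of thumb. The helper below is purely illustrative (Brain Builder does not expose such a function); it just encodes the heuristics described above:

```python
def suggest_annotator(objects_enter_or_leave_frame: bool,
                      perspective_changes: bool) -> str:
    """Rule of thumb from the walk-through: PolyTag generalizes better
    when the scene changes; InstaTag tracks steadily within a stable frame
    where objects move or overlap."""
    if objects_enter_or_leave_frame or perspective_changes:
        return "PolyTag"
    return "InstaTag"

# Telecom panels video: stable frame, moving/overlapping objects
print(suggest_annotator(False, False))  # InstaTag

# Street video: people walk in and out of the frame
print(suggest_annotator(True, False))   # PolyTag
```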

Next: Detection Documentation >>