Brain Builder Knowledge Base
Segmentation Walk-Through
Updated on 9/11/2019

Segmentation Annotation

Segmentation models use polygons with pixel-level precision to separate objects in the foreground of an image from the background. This technology enables product differentiation in photo-taking experiences, whether via an application or directly on a device. For example, this technology can be used to apply various focus modes to an image, such as sharpening the quality of a foreground object and blurring the background. However, training a model for these kinds of use cases typically requires a large amount of annotated data, which can make for an immensely time-consuming and costly process—until now.
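The "focus mode" example comes down to masking: a polygon drawn with pixel-level precision becomes a binary mask that tells the software which pixels belong to the subject. Here is a minimal sketch of that idea using the Pillow library; the file name and polygon coordinates are placeholders, and this illustrates the general concept rather than Brain Builder's implementation.

```python
# A minimal sketch of the "focus mode" idea: rasterize a polygon
# annotation into a binary mask, keep the subject sharp, and blur
# everything outside the mask. File name and coordinates are
# placeholders; this is an illustration, not Brain Builder's code.
from PIL import Image, ImageDraw, ImageFilter

image = Image.open("photo.jpg").convert("RGB")

# Polygon vertices as (x, y) pixel coordinates outlining the subject.
polygon = [(120, 80), (260, 60), (300, 200), (180, 260), (100, 180)]

mask = Image.new("L", image.size, 0)              # 0 = background
ImageDraw.Draw(mask).polygon(polygon, fill=255)   # 255 = foreground

blurred = image.filter(ImageFilter.GaussianBlur(radius=8))
result = Image.composite(image, blurred, mask)    # sharp subject, blurred backdrop
result.save("portrait_mode.jpg")
```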

Brain Builder's workspace enables faster and easier annotation with features such as brightness and contrast adjustment, brush tools, and the AI-Assisted Video Annotator, which learns tags from a single video frame and intelligently applies them to subsequent frames. The Video Annotator can drastically increase your output of annotated data; we have seen this technology increase efficiency by 90 percent and reduce costs by 70 percent. For example, if you have ten seconds of video running at 30 frames per second, you can produce 300 annotated images in just a few minutes!
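If you'd like to verify the frame math for a clip of your own, the count is simply duration times frame rate. Below is a quick sketch using OpenCV; the file name "bee.mp4" is a placeholder.

```python
# A quick sanity check of the frame math for a clip of your own,
# using OpenCV. "bee.mp4" is a placeholder file name.
import cv2

cap = cv2.VideoCapture("bee.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)                  # e.g. 30.0
frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))  # e.g. 300
cap.release()

print(f"{fps:.0f} fps x {frames / fps:.1f} s = {frames} annotatable frames")
```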

Image/Video Annotation

In this walk-through, we'll practice creating and labeling polygons, running the Video Annotator, and refining annotations in order to familiarize you with the workspace and some of the tools that can help accelerate this process. Instead of beginning with an image, we'll skip right ahead to a video; if you can tag a frame of a video, you can tag an image, as the process is the same.

Before we begin, follow this link to download the video we'll be working with to your computer.

  1. Log in to Brain Builder and click on the Getting Started project.

  2. Click the Create new dataset button. The Create a Dataset window opens.

  3. Enter the Dataset Name as "Segmentation Walk-Through" and select the Segmentation bubble for the type of dataset. In the Assign Classes field, create a class called "Bee." Then, click Add Dataset. The Dataset page opens.

  4. Click the Upload Images/Videos button, and select Choose From Computer. Browse your computer for the video you downloaded earlier, select it and click Open. Then, click the Start Upload button. A progress bar shows you the status of your upload.

  5. Once the video is loaded into your dataset, click Resume Tagging to open the Workspace. You are now ready to begin creating and labeling polygons.

  6. You can click the Draw Tool menu at the top of the left toolbar and select the Pen Tool to draw a polygon around the bee manually, point by point. Alternatively, you can use our AI-Assisted Snap-to-Fit technology to help you. Click the Snap to Fit Tool menu and select one of the options, such as Snap to Anything or Snap to Animal. Then, use the vertical and horizontal guiding lines to draw a box tightly around the bee, and click Next to generate the polygon.

    You can see that Snap to Fit does a very nice job of precisely outlining the bee, except for one thing: it didn't include the bee's wing! But don't worry, we can easily fix that. First, select the Bee class label from the pop-up that appears.

    Click the Draw Tool menu and select the Addition Tool. We'll use this tool to add the bee's wing to the existing polygon.
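    Conceptually, what the Addition Tool does is a geometric union: the new shape you draw is merged into the existing polygon. The sketch below illustrates the idea with the Shapely library; the coordinates are invented for illustration, and this is not Brain Builder's code.

    ```python
    # The Addition Tool, conceptually: merge a new region into an
    # existing polygon via a geometric union. Coordinates are made up.
    from shapely.geometry import Polygon

    body = Polygon([(100, 100), (220, 90), (240, 210), (110, 230)])  # snapped outline
    wing = Polygon([(200, 60), (260, 40), (250, 110), (210, 100)])   # missed wing

    merged = body.union(wing)                    # one shape covering both regions
    print(merged.area < body.area + wing.area)   # True: the overlap is counted once
    ```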

  7. Once the first frame of the video has been tagged to your satisfaction (i.e., the polygon precisely outlines the bee), it's time to run the Video Annotator. Click the Annotator button in the bottom-right.

    In the pop-up that appears, adjust the endpoint of the video clip slider to about five seconds. (We recommend running the annotator on a smaller clip first to check the accuracy before annotating the whole video.) You can leave the other default settings as they are and click Start. A progress bar shows you the status of your annotation, and a success message appears when it is finished.

    Click the Timeline button to view your annotated clip.

    You may notice the polygon does a pretty good job of staying around the bee as it moves, but there are a few spots where the accuracy could be improved. You can click through the annotated video clip frame-by-frame and use the workspace tools (such as the Addition and Subtraction Tools) to correct any details that need to be fixed. Any frame that has manual corrections will be exported regardless of what other tracks exist from running the Video Annotator. (For more information on exports, see Exporting Segmentation Data.)

    If you find there are numerous frames in a row that require corrections, it may be more efficient to re-run the Video Annotator on that portion of the video as a separate clip. For example, we can see in this video that when the bee lands on the flower and its shape changes, the annotator struggles to maintain accuracy. Click through to find the first frame where the bee's shape changes and the annotation is missing or inaccurate.

    Annotate the frame and run the Video Annotator on the rest of the video.

  8. Go through the video to correct any inaccurate frames and re-run the Video Annotator where necessary until you are satisfied with the results; a numeric way to spot-check a frame is sketched below. Your annotation saves automatically when you leave the workspace.
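If you'd rather spot-check frames numerically than eyeball every one, a common measure of agreement between two masks is intersection-over-union (IoU). The sketch below compares a frame's automatic mask against your corrected version; the file names are placeholders, and the single-channel mask format is an assumption rather than Brain Builder's documented export.

```python
# One way to quantify how close an auto-annotated frame is to your
# corrected version: intersection-over-union (IoU) of the two masks.
# File names are placeholders, and the single-channel mask format is
# an assumption, not Brain Builder's documented export.
import numpy as np
from PIL import Image

auto = np.array(Image.open("frame_042_auto.png").convert("L")) > 0
fixed = np.array(Image.open("frame_042_fixed.png").convert("L")) > 0

intersection = np.logical_and(auto, fixed).sum()
union = np.logical_or(auto, fixed).sum()
iou = intersection / union if union else 1.0  # identical empty masks count as perfect
print(f"IoU = {iou:.3f}")   # values near 1.0 mean the automatic mask was close
```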

After this walk-through, you should feel comfortable creating a dataset, uploading data, annotating images/videos, and running the Video Annotator. If at any point you need a refresher on these tasks, see the documentation for Uploading Data for Segmentation Annotation.

Next: Segmentation Documentation >>