
Component interactions of the Object Detector
Let's understand the interactions between the various components of the Object Detector application:

From the user's perspective, the application loads a random image and displays the objects (or labels) that have been detected within that image. The demo workflow is as follows:
- The Object Detector application's web interface calls the Demo Object Detection endpoint to start the demo.
- The endpoint calls the Storage Service to get a list of files that are stored in a specified S3 bucket.
- After receiving the list of files, the endpoint randomly selects an image file for the demo.
- The endpoint then calls the Recognition Service to perform object detection on the selected image file.
- After receiving the object labels, the endpoint packages the results in JSON format.
- Finally, the web interface displays the randomly selected image and its detection results.
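The workflow above can be sketched as a single endpoint function that orchestrates the two services. This is a minimal illustration, not the book's actual implementation: the function names `list_files` and `detect_objects`, the bucket name, and the sample data are all hypothetical stand-ins. A real Storage Service would list objects in the S3 bucket, and a real Recognition Service would run an object detection model on the selected image.

```python
import json
import random

# Hypothetical stand-in for the Storage Service: in a real deployment this
# would return the list of image files stored in the specified S3 bucket.
def list_files(bucket: str) -> list:
    return ["beach.jpg", "city.jpg", "forest.jpg"]

# Hypothetical stand-in for the Recognition Service: in a real deployment
# this would run object detection on the image and return its labels.
def detect_objects(bucket: str, filename: str) -> list:
    return [{"label": "Person", "confidence": 98.5},
            {"label": "Beach", "confidence": 95.1}]

def demo_object_detection(bucket: str) -> str:
    """Demo Object Detection endpoint: ties the workflow steps together."""
    files = list_files(bucket)               # get the files in the bucket
    image = random.choice(files)             # randomly select an image
    labels = detect_objects(bucket, image)   # detect objects in the image
    # package the selected image and its labels as JSON for the web interface
    return json.dumps({"imageName": image, "objects": labels})

result = json.loads(demo_object_detection("demo-bucket"))
print(result["imageName"], [obj["label"] for obj in result["objects"]])
```

Returning JSON keeps the endpoint decoupled from the web interface, which only needs the image name (to display the picture) and the label list (to display the detection results).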