DeGirum® PySDK

Develop @ the Speed of Thought

DeGirum PySDK is a Python package for developing sophisticated AI applications with just a few lines of code. PySDK's intuitive APIs coupled with commonly used utilities make it simple to deal with images, camera feeds, audio streams, and video files. Multiple models can be used in parallel or in cascade to extract valuable insights from the input data.

Getting Started

Instructions for installing PySDK, along with the API documentation, can be found at the DeGirum® Cloud Portal. Developers can install PySDK on their own machines and run inferences on AI HW hosted in the DeGirum Cloud Device Farm using API access tokens. Alternatively, developers can log on to DeGirum Labs, which provides a pre-configured JupyterLab environment for running PySDK. The credentials for the DeGirum® Cloud Portal and DeGirum Labs are the same.
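
For a quick start, a minimal setup sketch looks like this (the environment-variable name below is an illustrative placeholder, not an official convention; see the Cloud Portal documentation for the authoritative steps):

    # install PySDK from PyPI:
    #   pip install degirum
    import os
    import degirum as dg

    # API access token obtained from the DeGirum Cloud Portal; keeping it in an
    # environment variable is one common approach (the variable name is a placeholder)
    token = os.environ["DEGIRUM_CLOUD_API_ACCESS_TOKEN"]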

Same Code Everywhere

DeGirum PySDK is designed to let developers create AI applications in the cloud before setting up any HW locally. Once an application is developed in the cloud, developers can purchase the required HW and deploy the application at the edge by changing only one line of code. The APIs work with a model zoo object, which is a collection of models served by an AI server. PySDK supports connecting to AI servers in different locations with a unified API. Currently, the following options are supported:

Option | AI Server Location                                                          | Use Case
------ | --------------------------------------------------------------------------- | ------------------------------------
1      | DeGirum Cloud Device Farm: accessed using DeGirum Cloud Application Server | Evaluation of DeGirum PySDK and HW
2      | DeGirum Farm: accessed using Peer-to-Peer VPN                               | Rapid application development
3      | Local Area Network: accessed using server-client protocol                  | Performance optimization, deployment
4      | On-Device: accessed using server-client protocol                           | Performance optimization, deployment
5      | On-Device: direct access                                                   | Performance optimization, deployment

Connect to Model Zoo

You can connect to a model zoo served by AI servers hosted on a variety of platforms: on the DeGirum Cloud Platform, on a system in the DeGirum farm accessible via Peer-to-Peer VPN, or on your local system accessible through the LAN or directly, all with a unified API. Once you connect to a model zoo, the rest of the code remains the same.

    import degirum as dg, mytools, cv2

    # 1: DeGirum Cloud Server, 2: Peer-to-Peer VPN, 3: LAN, 4: Localhost, 5: Direct access
    inference_option = 1
    zoo = mytools.connect_model_zoo(inference_option)  # connect to the model zoo
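
For illustration, a helper like mytools.connect_model_zoo could dispatch these options roughly as follows. This is a hypothetical sketch: the server addresses are placeholders, the no-argument direct-access call is an assumption, and the real helper shipped with DeGirum's examples may differ.

    import degirum as dg

    def connect_model_zoo(inference_option: int):
        # hypothetical dispatcher: one entry point for all five inference options
        if inference_option == 1:
            return dg.connect_model_zoo("degirum-cloud-server-address")  # placeholder address
        elif inference_option in (2, 3):
            return dg.connect_model_zoo("my-ai-server-hostname")  # P2P VPN or LAN host (placeholder)
        elif inference_option == 4:
            return dg.connect_model_zoo("localhost")  # AI server running on this machine
        else:
            return dg.connect_model_zoo()  # direct access to locally installed HW (assumed form)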

Run Inference

PySDK makes it easy to visualize inference results, eliminating the need for boilerplate code.

    model = zoo.load_model('mobilenet_v2_ssd_coco--300x300_quant_n2x_orca_1')  # load model
    data = 'path/to/my/image.jpg'  # or image URL, or numpy array, or PIL image object
    result = model(data)  # run inference
    display(result.image_overlay)  # display inference result
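
Beyond the overlay image, the result object exposes the parsed predictions as a list of dictionaries. The 'bbox' key is used in the cascade example below; 'label' and 'score' are assumed typical detection keys:

    for obj in result.results:  # one dictionary per detected object
        print(obj['label'], obj['score'], obj['bbox'])  # 'label' and 'score' are assumptions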

Cascade Multiple Models

Develop sophisticated applications with multiple AI models, where the output of one model is used to generate the input for the next model.

    people_det_model = zoo.load_model("yolo_v5s_person_det--512x512_quant_n2x_orca_1")
    pose_model = zoo.load_model("mobilenet_v1_posenet_coco_keypoints--353x481_quant_n2x_orca_1")
    people = people_det_model('path/to/my/image.jpg')  # run person detection inference
    with mytools.Display("Poses") as display:
        for person in people.results:  # iterate over detection results
            person_box = people.image.crop(person['bbox'])  # extract detected person image
            poses = pose_model(person_box)  # run pose detection inference
            display.show(poses.image_overlay)  # display pose detection result

Get Maximum Performance Out of HW

PySDK automatically schedules inference on multiple images so that the HW is used optimally.

    directory_path = 'path/to/my/images/'
    model = zoo.load_model('mobilenet_v2_ssd_coco--300x300_quant_n2x_orca_1')
    with mytools.Display("Directory Show") as display:
        for res in model.predict_dir(directory_path):  # iterate over all images in directory
            display.show(res.image_overlay)
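
The same pipelining applies to any iterable of inputs via predict_batch, which the camera examples below also use. As a sketch with standard-library globbing (assuming predict_batch accepts the same input types as model() itself):

    import glob

    def image_source():
        # yield image paths one by one; PySDK overlaps pre-processing, inference,
        # and post-processing across the stream of inputs
        yield from glob.glob(directory_path + '*.jpg')

    with mytools.Display("Batch Show") as display:
        for res in model.predict_batch(image_source()):
            display.show(res.image_overlay)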

Integrate Camera Feed

PySDK provides a simple interface to handle ML inference on webcam feeds as well as RTSP camera feeds. Just supply the camera ID or the URL of the RTSP stream and watch PySDK do its magic.

    camera_id = 'rtsp://user:password@hostname'
    model = zoo.load_model('mobilenet_v2_ssd_coco--300x300_quant_n2x_orca_1')
    with mytools.Display("AI Camera") as display, \
         mytools.open_video_stream(camera_id) as stream:

        # run batch prediction on a video stream
        for res in model.predict_batch(mytools.video_source(stream)):
            display.show(res.image_overlay)
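
For a local webcam, the camera ID is typically an integer device index rather than a URL (0 is a common default, though the exact index depends on your system):

    camera_id = 0  # first local webcam; device index may vary per system
    with mytools.Display("Webcam") as display, \
         mytools.open_video_stream(camera_id) as stream:
        for res in model.predict_batch(mytools.video_source(stream)):
            display.show(res.image_overlay)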

Work with Multiple Cameras

Want to integrate multiple cameras, with each camera running a different model? No problem.

    camera_id_1 = 'rtsp://user1:password1@hostname1'
    camera_id_2 = 'rtsp://user2:password2@hostname2'
    model_1 = zoo.load_model('mobilenet_v2_ssd_coco--300x300_quant_n2x_orca_1')
    model_2 = zoo.load_model('yolo_v5s_coco--512x512_quant_n2x_orca_1')
    with mytools.Display("AI Camera 1") as display_1, \
         mytools.open_video_stream(camera_id_1) as stream_1, \
         mytools.Display("AI Camera 2") as display_2, \
         mytools.open_video_stream(camera_id_2) as stream_2:

        # run both batch predictions in lockstep, one frame from each stream at a time
        for res_1, res_2 in zip(model_1.predict_batch(mytools.video_source(stream_1)),
                                model_2.predict_batch(mytools.video_source(stream_2))):
            display_1.show(res_1.image_overlay)
            display_2.show(res_2.image_overlay)

Annotate Video Files

Quickly annotate video files and play back the videos with rich AI insights.

    from pathlib import Path

    input_filepath = 'path/to/my/video.mp4'
    orig_path = Path(input_filepath)
    ann_path = orig_path.with_name(orig_path.stem + "_annotated" + orig_path.suffix)

    model = zoo.load_model('mobilenet_v2_ssd_coco--300x300_quant_n2x_orca_1')

    with mytools.open_video_stream(input_filepath) as stream:  # open video stream from file
        w = int(stream.get(cv2.CAP_PROP_FRAME_WIDTH))
        h = int(stream.get(cv2.CAP_PROP_FRAME_HEIGHT))
        fps = stream.get(cv2.CAP_PROP_FPS)

        # create and open the stream writer; frame size is (width, height)
        writer = cv2.VideoWriter()
        writer.open(str(ann_path), cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))

        for res in model.predict_batch(mytools.video_source(stream, report_error=False)):
            writer.write(res.image_overlay)

        writer.release()
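
To play the annotated file back, the same stream helpers can be reused; a minimal sketch:

    with mytools.Display("Annotated Video") as display, \
         mytools.open_video_stream(str(ann_path)) as stream:
        for frame in mytools.video_source(stream):
            display.show(frame)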

Scale Compute Capacity

Need more compute? Use multiple ORCAs to scale performance without changing a single line of code! Just connect to an AI server with multiple ORCAs installed.

    # connect to an AI server hosting multiple ORCAs
    zoo = dg.connect_model_zoo('hostname-of-ai-server-with-many-orcas')

    # the rest of the code is the same as before: it will scale automatically
    directory_path = 'path/to/my/images/'
    model = zoo.load_model('mobilenet_v2_ssd_coco--300x300_quant_n2x_orca_1')
    with mytools.Display("Directory Show") as display:
        for res in model.predict_dir(directory_path):  # iterate over all images in directory
            display.show(res.image_overlay)