ISADD is presented at the MCAST Research Expo

The ISADD project was presented at the MCAST Research Expo in two separate talks, based on two of the four main research pillars at MCAST:

1. Emerging Technology and Creative Innovation

Talk delivered by Daren Scerri and Gerard Said Pullicino

2. Quality Pedagogy and Effective Learning

Talk delivered by Neville Schembri, Phyllis Farrugia Abanifi, Jonathan Vella and Dorianne Cachia

The hardware dimension

ISADD is now procuring the hardware needed to build the prototype. While the software prototype has been tested on a development platform, the final prototype needs to be deployed on hardware that offers better performance, both in detection accuracy and in camera capture. Initial testing showed that the quality of the webcam feed and of the video capture was an important factor in how accurately the model identified the various elements of PPE.

We have therefore started the procurement process, thanks to funds kindly provided by MCST, to obtain better-quality capture equipment and a more powerful graphics processor that promises to raise the inference framerate of the computer currently in use. We have also procured a powerful laptop that can be used for inference as a backup should network connectivity not be available in the testing environment.

Once this equipment is available, further performance testing and initial benchmarking of the solution will be carried out prior to more detailed testing.
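As a rough illustration of what that initial benchmarking might look like, the sketch below times the per-frame inference rate on a recorded clip. The `run_inference` callable and the clip path are placeholders for illustration, not part of the actual ISADD code.

```python
import time

import cv2  # OpenCV is used only to read the recorded test clip


def benchmark_fps(video_path, run_inference, warmup=10, max_frames=300):
    """Measure average inference frames per second on a recorded clip.

    `run_inference` stands in for whatever detection call the deployed
    model will expose; it takes a BGR frame and returns its detections.
    """
    cap = cv2.VideoCapture(video_path)
    processed = 0
    start = None
    while processed < warmup + max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        run_inference(frame)
        processed += 1
        if processed == warmup:
            start = time.perf_counter()  # start timing after warm-up frames
    cap.release()
    if start is None:
        return 0.0
    elapsed = time.perf_counter() - start
    timed_frames = processed - warmup
    return timed_frames / elapsed if elapsed > 0 else 0.0
```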

Research updates

The architecture of the ISADD application is based on an AI server that identifies the elements of the personal protective equipment being donned. The AI server performs inference on a video stream captured by a client device, identifies the objects, and returns text describing what is contained in the image.

The software required to perform inference is highly specialised, runs in a specific programming language, and is quite processor intensive. The ISADD architecture therefore has to be split into a client that can run in more ‘lightweight’ contexts and an inference server that hosts the required models.
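A minimal sketch of the ‘lightweight’ client half of that split, assuming purely for illustration that the inference server exposes an HTTP endpoint which accepts a JPEG frame and returns an annotated JPEG; the URL and interface below are hypothetical, not the actual ISADD interface.

```python
import cv2
import numpy as np
import requests

# Placeholder address: the real server location depends on the deployment
SERVER_URL = "http://inference-server.example:8080/detect"


def stream_frames():
    """Capture webcam frames, post each to the inference server and
    display whatever annotated frame comes back."""
    cap = cv2.VideoCapture(0)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            ok, encoded = cv2.imencode(".jpg", frame)  # compress before sending
            if not ok:
                continue
            response = requests.post(
                SERVER_URL,
                data=encoded.tobytes(),
                headers={"Content-Type": "image/jpeg"},
                timeout=5,
            )
            annotated = cv2.imdecode(
                np.frombuffer(response.content, np.uint8), cv2.IMREAD_COLOR)
            cv2.imshow("ISADD annotated feed", annotated)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()


if __name__ == "__main__":
    stream_frames()
```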

Today’s experiment validated the ISADD architecture: the webcam stream captured on the client device was served to the inference server, which performed the inference tasks on the stream and returned a video stream annotated to identify the elements of the PPE being worn.

Image on the left: the result of the inference. Image on the right: the raw webcam stream. The inference server was offsite, with the images streamed over the internet.
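The server half of that experiment could be sketched along the following lines, again only as an illustration: the Flask endpoint and the `detect_ppe` stub stand in for the real detection model and serving stack.

```python
import cv2
import numpy as np
from flask import Flask, Response, request

app = Flask(__name__)


def detect_ppe(frame):
    """Placeholder for the real detector: returns (label, x, y, w, h) boxes."""
    return [("mask", 50, 50, 120, 120)]  # dummy detection for illustration only


@app.route("/detect", methods=["POST"])
def detect():
    # Decode the JPEG frame posted by the client
    frame = cv2.imdecode(np.frombuffer(request.data, np.uint8), cv2.IMREAD_COLOR)
    # Draw the annotations that the client will display
    for label, x, y, w, h in detect_ppe(frame):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    ok, annotated = cv2.imencode(".jpg", frame)
    return Response(annotated.tobytes(), mimetype="image/jpeg")


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```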

Development of an Integrated System for Donning and Doffing for Healthcare Professionals

Donning and doffing is the process by which professional healthcare workers put on and remove their protective gear in a way that minimizes the risk of infection from contagious viruses.

The donning and doffing process involves a series of carefully choreographed steps, which must be followed to ensure that there is no risk of contagion for either the healthcare worker or their patients.

These steps can be hard to memorize, especially for nurses and doctors who are already overworked and under stress because of the pressures of the contexts they work in.

The aim of this project is to make healthcare workers’ lives a little bit easier by creating a system that prompts them with the correct steps to follow for donning and doffing, and notifies them if any steps have been skipped.
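One way such step checking could be expressed is sketched below, using an entirely illustrative donning sequence; the real ISADD steps follow clinical guidance, not this list.

```python
# Illustrative donning order only; the real protocol comes from clinical guidance.
DONNING_STEPS = ["hand hygiene", "gown", "mask", "goggles", "gloves"]


def check_progress(observed_steps):
    """Compare the steps observed so far against the expected order.

    Returns the next expected step and any expected steps that appear
    to have been skipped.
    """
    skipped = []
    expected_index = 0
    for step in observed_steps:
        if step not in DONNING_STEPS:
            continue  # ignore anything outside the protocol
        step_index = DONNING_STEPS.index(step)
        # Every expected step before this one that was not observed is skipped
        skipped.extend(
            s for s in DONNING_STEPS[expected_index:step_index]
            if s not in observed_steps
        )
        expected_index = max(expected_index, step_index + 1)
    next_step = (DONNING_STEPS[expected_index]
                 if expected_index < len(DONNING_STEPS) else None)
    return next_step, skipped


# Example: goggles were observed before the mask was seen
print(check_progress(["hand hygiene", "gown", "goggles"]))
# -> ('gloves', ['mask'])
```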