What is URBAN-i?
URBAN-i sees the whole scene: how a person rides a bike, what the weather was like, the condition of the road surface, and how the people nearby were behaving. It then analyses the scene and compares it to past events to work out the most likely contributing factors. Finally, it tells the person what the risks were and how to avoid them in the future.
Developing AI for public good
Building ecosystems and technologies for digital twins
Deploying AI at the edge for safer urban environments
URBAN-i is operationalised through AI-embedded cameras that generate synchronized data and provide solutions at the edge, offline, without cloud computation. When a person riding a bike has a near miss, URBAN-i sees the whole scene, from how they were cycling to what the weather was like and the condition of the road surface, to work out the most likely contributing factors. The core elements of URBAN-i are its computer vision algorithms and its sensor design.
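The edge-only design described above can be sketched as a per-frame loop that runs entirely on the device, with nothing sent to a cloud service. This is a minimal illustration: the class and function names (`Frame`, `EdgeCamera`, the near-miss rule) are invented for the example and do not reflect URBAN-i's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """One camera frame, already reduced by an on-device detector."""
    timestamp: float
    objects: list  # e.g. ["cyclist", "car"] (hypothetical labels)

@dataclass
class EdgeCamera:
    """Processes frames locally; nothing leaves the device."""
    events: list = field(default_factory=list)

    def process(self, frame: Frame) -> None:
        # Toy rule: flag a potential near miss when a cyclist and a
        # car appear in the same frame. A real system would use
        # trajectories, distances, and learned models instead.
        if "cyclist" in frame.objects and "car" in frame.objects:
            self.events.append(("near_miss", frame.timestamp))

cam = EdgeCamera()
for f in [Frame(0.0, ["cyclist"]), Frame(0.04, ["cyclist", "car"])]:
    cam.process(f)
print(cam.events)  # [('near_miss', 0.04)]
```

The point of the sketch is the data flow: frames arrive, are analysed in place, and only compact event records are retained, which is what makes offline, cloud-free operation feasible.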
For more details about our products, see below.
URBAN-i is also provided as a cloud-based service to transform user-defined images and videos streams into information online, without any coding experience, based on services fees.
When critical events occur in cities, rapid detection and response are of utmost importance. However, critical events often result from interactions between systems that are inter-dependent and complex. When a traffic incident occurs, it could be because of the weather, the built or natural environment, or road users' interactions. Only by taking a holistic view can we develop a system that detects and understands these events and their causes. Current sensors are task-based (e.g. CCTV, traffic cameras, pollution sensors, help points) and output data that must be processed and integrated by experts before they can inform decision-making. Our technology provides robust solutions based on deep learning and computer vision. As it watches, URBAN-i learns how city systems interact to produce risk, providing actionable insights.
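The holistic view described above, where weather, environment, and road-user behaviour jointly produce risk, can be illustrated as a simple fusion of normalised factor observations. The factor names and weights below are invented for the example; in practice such weights would be learned from past events rather than hand-set.

```python
def risk_score(observations: dict, weights: dict) -> float:
    """Weighted combination of factor observations, each in [0, 1].

    Hypothetical stand-in for a learned model that relates
    co-occurring city conditions to incident risk.
    """
    total = sum(weights.get(k, 0.0) * v for k, v in observations.items())
    return min(1.0, total)  # clamp to [0, 1]

# Invented example: heavy rain, moderately worn road, busy pavement.
obs = {"rain_intensity": 0.8, "road_wear": 0.4, "pedestrian_density": 0.6}
w = {"rain_intensity": 0.5, "road_wear": 0.3, "pedestrian_density": 0.4}
print(round(risk_score(obs, w), 2))  # 0.76
```

No single factor here is alarming on its own; it is the combination that pushes the score up, which is the motivation for fusing task-based sensor streams rather than reading each in isolation.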
It comprises five sub-systems (vision, voice, environmental, communication, and data storage and encryption) that function simultaneously and interact with one another.
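One common way for concurrent sub-systems to interact is through a shared event bus. The sketch below uses the sub-system names from the text, but the bus-based design and all message contents are an assumption for illustration, not URBAN-i's documented architecture.

```python
import queue

# Shared bus carrying (source, message) tuples between sub-systems.
bus = queue.Queue()

def vision_system():
    # Hypothetical detection published for the other sub-systems.
    bus.put(("vision", "cyclist detected"))

def environmental_system():
    bus.put(("environmental", "heavy rain"))

vision_system()
environmental_system()

# A consumer (e.g. the communication or storage sub-system) drains
# the bus and acts on each event.
received = []
while not bus.empty():
    source, message = bus.get()
    received.append((source, message))
    print(f"{source}: {message}")
```

`queue.Queue` is thread-safe, so the same pattern works unchanged if each sub-system runs in its own thread on the device.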
Our technology is built on extensive research and development, some of which has been published as journal articles. For more details, see below.