signality.ai — superpowers for video

The world around us is full of video cameras that capture interactions and moments in every possible industry. They capture in 4K and store petabytes of data every day. Today, however, only a fraction of these hundreds of millions of video cameras can describe what they are looking at. They still fail at a task that is simple for a human: describing and classifying what’s happening in the video.

With Signality.ai we will add superpowers to any video stream or off-the-shelf camera so companies can understand, classify, and create valuable data from the video they capture.

A significant step forward
We are working hard to give superpowers to single-angle video streams, a significant step forward from multi-camera setups or specialized computer vision cameras. Our platform uses artificial intelligence and deep neural networks in the cloud to understand the actions of individuals and groups, and to track people and objects. It will recreate the whole physical space, solving the inverse problem, with all its geometry, dynamics, and environment.

This will mean computer vision superpowers for everyone — great success!

Starting with automating expert tasks in sports
Making computers understand the world through video is a daunting task. Our roadmap to achieve that will focus on mastering one vertical at a time, solving real-world problems, and layering on more and more capability with each new vertical. This, in turn, makes every subsequent vertical easier to solve.

Our first vertical is sports, and our initial focus will be to solve real-world problems for two target segments: 1) parents, fans, and teams who want to create highlight reels but are stuck looking through video to find the best actions and then spending hours editing to make it look good; 2) coaches who spend countless hours on manual data entry when breaking down video of opponents.

Shipping product is a priority
We believe in shipping product to the market as soon as possible, not sitting in a garage and building a platform for years upon years. Our go-to-market approach gets our AI platform out into the wild with the absolute minimal building blocks possible (identifying highlights), yet still delivers a major improvement for users over existing solutions on the market.

For parents, fans, and teams: Broadcast-level automatic highlight reels
Our coming iOS and Android app, BLAST, lets a user record a game, or simply upload an existing video file, and our platform automatically detects the best plays and creates an awesome highlight reel with broadcast-style graphics. Imagine driving home from the game and being able to show your kid the best action from the game he or she just played in, with the same experience as watching a highlight on ESPN. No editing or additional software needed.

For coaches: No more manual data entry
Coaches painstakingly create their own dataset from scratch every time they prepare for a new opponent or self-evaluate. Imagine a 60-minute football game, with about 120–150 play sequences of 5–10 seconds each. For each play sequence the coach needs to annotate, or tag, up to 100 actions, formations, and stats — a whopping 15,000 tags per game. Thinking about doing this in real time to get actionable analytics for in-game adjustments? No way. How about getting even more in-depth data that the human eye can’t see, such as speed, acceleration, or angles? Forget about it. Signality.ai is changing all that.
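The workload above is easy to sanity-check with a quick back-of-the-envelope calculation, using only the figures quoted in the text (the play count and tags-per-play are the upper ends of the ranges given):

```python
# Back-of-the-envelope estimate of a coach's manual tagging workload.
# Figures come from the text above: 120-150 play sequences per 60-minute
# game, up to 100 tags (actions, formations, stats) per play sequence.
plays_per_game = 150   # upper end of the 120-150 range
tags_per_play = 100    # up to 100 tags per play sequence

tags_per_game = plays_per_game * tags_per_play
print(tags_per_game)   # 15000 tags per game, all entered by hand today
```

Even at a few seconds per tag, that is many hours of manual data entry per game, which is why doing it live during a game is out of reach for a human.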

More insights and more data
Our AI-powered platform will be able to interpret what is happening in the physical world with much more insight than a human using his or her eyes. We train our platform to perform the expert task a coach does: analyzing and annotating what happens on the field, pitch, or court in real time. In addition to automating that video breakdown workflow, our platform also captures a richer dataset than human eyes can: player speed, acceleration, angles, and distances between players. That dataset is super-valuable for coaches, but rich sports data is also a treasure trove for companies.

Who wants lots of real-time sports data?
Many companies need rich, high-quality sports data: companies for which real-time sports matters. For instance, real-time video content is the engine of all major social platforms, and sports is a key driver: the NFL and Twitter partnered to stream Thursday Night Football this fall, and both Snapchat and Facebook are working to acquire live-streaming rights to sporting events. Using Signality.ai, these companies will get superior data volume in real time.

Building blocks for the long-term
Sports, and football in particular, is the perfect confluence of several things that build out our platform’s capabilities: a constrained and known physical environment; a painful, time-consuming manual task that coaches need to complete every week; access to massive amounts of structured video data to train our neural networks; huge market demand for sports data; and complex but structured movements by individuals and groups of players. Solving this problem for football creates the building blocks to expand into other verticals that need complex human movement and interaction understood in real time, without additional hardware, or as an augmentation to existing camera systems.

A battle-tested team
Mikael Rousson, CTO, holds a Ph.D. in computer vision and has fifteen patents in the field. He was part of the team at Polar Rose, acquired by Apple, that built the iPhone’s face detection feature. After that he worked on object recognition at Moodstocks, which was then acquired by Google. Recently, he trained deep neural networks to find mitosis in histopathological images and placed 2nd in a global competition against the likes of Microsoft and IBM.

Michael Höglund, CEO, has a background in leadership and strategy roles in product management, marketing, sales, and business development for enterprises, startups, and as a consultant. He built several sports games for PC as Product Manager at powerchallenge.com, which was backed by Balderton Capital / Benchmark Capital. He founded sports tech company xowizard.com and advises several startups on product management, marketing, and business development.

The world is changing
Historically, any intelligence in video cameras has come from an embedded architecture, and today many camera manufacturers are moving to specialized chip architectures to increase that capability. 99% of all intelligence still runs on the hardware, or "at the edge" as the industry calls it. There is movement toward more intelligent video cameras, but the activity is mostly driven by hardware and chip manufacturers.

We believe this will shift over the next couple of years, allowing the intelligence to move into the cloud, driven by higher bandwidth and 5G, more powerful GPUs, advances in deep neural networks, and increasingly commoditized camera chip architectures. The advantages of having all your intelligence embedded will, over time, become less apparent.

The shift from hardware to cloud
The shift will be similar to what happened in the smartphone world, where most of the intelligence is moving into the cloud, and one where owners of video systems will come to realize what new intelligent applications become possible by augmenting their existing solutions with deep neural networks that learn from all other cameras, tapping into a network of cameras that becomes more and more accurate.

To us, it’s a given that the majority of the intelligence should live in the cloud: it is easier to update, and it is accessible to orders of magnitude more video cameras.

Retrofitting speeds up go-to market
Retrofitting existing cameras, by layering our intelligence on top of their video streams, is our path into all the video systems already on the market. Bringing this intelligence to existing video cameras is our first step.

Market size and potential use cases
The top-down approach would be to point to a $4.7 BN sports analytics market and a $71.28 BN video surveillance market and call it a day. In truth, it’s hard to judge how big our addressable market is. We can envision many applications and use cases for intelligent video, and we feel confident we can attack a big slice of those pies. For instance, intelligent and autonomous video monitoring and surveillance can streamline operations and maintenance in geographic locations that are inaccessible due to terrain and harsh environments. We can make a case for solving interesting pains and problems in the following verticals:

• Security Surveillance
• Public Safety & Emergency Command
• Smart City
• Campus & Education
• Factory & Production
• Railway
• Construction
• Military & Security
• Oil & Gas
• Energy & Distribution
• Shopping Malls & Supermarkets
• Health
• Transport & Travel
• Commerce & Shopping
• Events & Arenas

The signal in an autonomous future
Some of those verticals may sound far-fetched, but our society is increasingly moving into autonomous mode. When we have autonomous large-scale factories producing goods, distribution centers shipping those goods via self-driving trucks to an autonomous mega construction site run by a fleet of smart construction robots, it will be only natural that video is one of the key technologies for remotely working and interacting with those sites.

Intelligent and actionable video monitoring and surveillance will be our eyes and signal in the petabytes of noise created every day.

Want to build an AI company with us?
Investors, seed funds, VCs, and angels — drop us a line.

Computer vision mavens or deep learning visionaries wanting a mission — ping us.
