BRS Labs Is Gone; Giant Gray Is Here; Giant Gray Is Gone; Omni AI Is Here

Author: Brian Karas, Published on Mar 28, 2016

BRS Labs is no more. After raising over $100M and years of outrageous claims, the company is effectively starting over.

In this report, we examine the reincarnated BRS Labs and what to expect under its new CEO, based on an interview with him.

Update 2017: Giant Gray is now gone, replaced by Omni AI.


Comments (26)

A leopard can't change its spots.

Technically, they are an elephant now...

I do not know if they can really change or not, but it is going to be a lot of work. The whole 'sell an add-on analytics server for a ton of money' business model has never really caught on.

To that end, if they can overcome the past wildness, I am curious if they can reshape their offering to make it more appealing to the broader market.

The elephant's been in the room awhile, it's a smart move to call it out. ;)

BRS Labs' approach had been that rules-based analytics were limited in their usefulness and applications, and that AI was a more logical approach.

My typical response when selling analytics was that I didn't have customers say to me "I don't know what my problem is, or how to define it." Most customers purchasing analytics in the broad market knew what they wanted alerts for, and it typically revolved around people or vehicles being in specific places outside of some business hours. Sometimes it was 24/7, such as at a cell tower site or a remote oil/gas wellhead.

It's generally straightforward to set up a system to alert on a person crossing a line or entering a secure zone. Much of the value of one product over another is in the details of the setup and maintenance of the system, but there are enough choices on the market today to offer some viable options.
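A tripwire rule like this reduces to a small geometry check. Here is a minimal sketch (a generic illustration, not any particular product's implementation): an object's tracked centroid sits on one side of a virtual line, and a crossing is the moment that side flips.

```python
def side_of_line(p, a, b):
    """Sign of the cross product: which side of the line a->b the point p is on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def crossings(track, a, b):
    """Count how many times a tracked centroid crosses the virtual line a->b."""
    count = 0
    prev = side_of_line(track[0], a, b)
    for p in track[1:]:
        cur = side_of_line(p, a, b)
        # A crossing is a sign change (points exactly on the line are skipped).
        if prev != 0 and cur != 0 and (prev > 0) != (cur > 0):
            count += 1
        prev = cur
    return count

# A person walking left to right across a vertical tripwire at x=5:
track = [(1, 3), (3, 3), (6, 3), (8, 3)]
print(crossings(track, (5, 0), (5, 10)))  # 1
```

The hard part in practice is not this check but everything feeding it: stable object tracking, camera placement, and keeping the virtual line aligned with the real scene.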

Where rules-based analytics are harder to get value from is in public places like shopping malls or airports. It's hard to define a "rule", because people are generally going to be everywhere. Going in every direction, loitering, and even leaving stuff behind. This is where an anomaly-based system can have value, because it is theoretically able to develop an understanding of what is "normal" in a given scene and alert on an abnormality.

A big challenge for an anomaly-based system is knowing what it will detect and when it will detect it. Also what happens if you have a problem that is frequent, like people walking along railroad tracks? When does "anomalistic" become "normal"? How does a customer test the system for functionality without also affecting its learning?
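The "when does anomalistic become normal?" question can be made concrete with a toy detector (a generic running-baseline sketch, not Giant Gray's actual model): each observation is scored against a running mean and standard deviation, but is also folded back into them, so a repeated "anomaly" is gradually absorbed into the baseline and stops alerting.

```python
class RunningBaseline:
    """Toy anomaly detector: z-score against a running mean/variance (Welford)."""

    def __init__(self, threshold=3.0):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.threshold = threshold

    def observe(self, x):
        """Return True if x is anomalous, then fold x into the baseline."""
        anomalous = False
        if self.n >= 2:
            std = (self.m2 / (self.n - 1)) ** 0.5
            anomalous = std > 0 and abs(x - self.mean) / std > self.threshold
        # Every observation updates the model -- so a frequent "anomaly"
        # (e.g., workers on the tracks every day) becomes the new normal.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

det = RunningBaseline()
quiet = [det.observe(1.0 + 0.01 * (i % 3)) for i in range(50)]   # calm scene
spikes = [det.observe(10.0) for _ in range(20)]                  # repeated event
print(spikes[0], spikes[-1])  # True False: early spikes alert, later ones don't
```

This is exactly the commenter's testing dilemma in miniature: exercising the system with staged events also teaches it that those events are normal, unless the product provides some way to exclude test data from learning.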

Certainly these are not unanswerable questions, but they do make customers think a little more about what they need or want the system to do. IMO, Giant Gray needs to do a lot on the marketing front to make it very clear exactly what the system will do and how it does it.

The other problem, as John also mentioned, is that it's really hard to be a stand-alone software business in the analytics market. You need to sell a lot of stuff, or go after high-dollar systems (or, if you're lucky, both). Tied into this are the integrations with VMSes. Few customers like having multiple interfaces, so you need to either build your own very robust software interface, or spend a lot of time and effort integrating with many VMS platforms.

Giant Gray incorporates data from multiple sources into its algorithms. I couldn't get a really good example of this beyond "data from SCADA systems", but I think the approach is to look at the video as just one component and factor that against things like temperature. E.g., a shopping mall might expect traffic volumes to go up or down based on weather like snowstorms or heatwaves.

To me, the ideal system would have some rules and some anomaly detection. There are basic things you might want alarms on as instantaneous alerts: a person loitering outside a back door, wrong-way vehicles trying to sneak in the exits of a paid parking garage. On the flip side, some events can only be recognized as interesting in the context of longer sample periods. Are employees at a business suddenly arriving early or staying later? Is there high activity in the far end of the parking lot?
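A rules-plus-anomaly hybrid like this could be structured as a simple two-stage check (a hypothetical sketch; the rule names and event fields here are invented for illustration): deterministic rules fire instantly, and everything else falls through to a statistical score.

```python
def make_hybrid(rules, anomaly_score, threshold=3.0):
    """Combine instant rules with statistical anomaly scoring.

    rules: list of (name, predicate) pairs evaluated over an event dict.
    anomaly_score: callable returning a score (e.g., a learned z-score).
    """
    def evaluate(event):
        for name, predicate in rules:
            if predicate(event):
                return ("rule", name)       # instantaneous, deterministic alert
        if anomaly_score(event) > threshold:
            return ("anomaly", None)        # contextual, learned alert
        return None                         # nothing interesting
    return evaluate

# Hypothetical rules matching the examples above:
rules = [
    ("back_door_loiter", lambda e: e["zone"] == "back_door" and e["dwell_s"] > 60),
    ("wrong_way", lambda e: e["zone"] == "garage_exit" and e["heading"] == "in"),
]
evaluate = make_hybrid(rules, anomaly_score=lambda e: e.get("z", 0.0))

print(evaluate({"zone": "back_door", "dwell_s": 90, "heading": "out"}))
# ('rule', 'back_door_loiter')
print(evaluate({"zone": "lot_far_end", "dwell_s": 5, "heading": "out", "z": 4.2}))
# ('anomaly', None)
```

The design point is that the two stages answer different questions: rules encode what the customer already knows they care about, while the anomaly stage covers what nobody thought to write a rule for.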

Today the video analytics market is still too fragmented. Some companies are focusing on rules, some on anomalies. This is like going to one company for indoor cameras and another for outdoor cameras. Customers, in my experience, do not want a lot of "best in breed" systems tied together like this; there are too many integration issues or things that do not work intuitively across the platforms.

I think that for Giant Gray to have a major shot at success, they need to be able to solve as many of the analytics problems as possible, meaning they need to add some simple rules-based alerts to the anomaly detection code.

Informative.

...meaning they need to add some simple rules-based alerts to the anomaly detection code.

Is your understanding that there are essentially no parameters or "rules" of any kind when configuring a BRS anomalous system? I always assumed that they downplayed the rules-based configuration to stand out in the market, but that they did at least some minimal definition in a traditional manner.

Is your understanding that there are essentially no parameters or "rules" of any kind when configuring a BRS anomalous system?

Yes. From what I've heard/seen it's designed to be all automatic.

I think that part of this is because they don't have a strong object classifier, they don't identify moving blobs as "person" or "vehicle", just as some kind of non-background object. This makes it harder to create rules because customers often want some amount of specificity around what triggers the rule.

A BRS employee once explained it to me as "we have poor vision, but a really big brain that can compensate". I believe they are analyzing a relatively low resolution stream, but extracting information from the stream that other systems would not concentrate on.
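Detecting generic "non-background objects" without classifying them is a textbook background-subtraction idea, sketched below on a toy pixel grid (this is the standard technique, not BRS Labs' proprietary pipeline): keep a running average of each pixel, and flag pixels that deviate from it.

```python
def update_background(bg, frame, alpha=0.05):
    """Exponentially blend the new frame into the per-pixel background model."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=30):
    """Mark pixels that differ from the background model -- generic
    'non-background' blobs, with no notion of person vs. vehicle."""
    return [[abs(f - b) > thresh for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

# A static 4x4 scene (intensity 10), then a bright "blob" covers two pixels.
static = [[10] * 4 for _ in range(4)]
bg = [row[:] for row in static]
for _ in range(20):                      # learn the empty scene
    bg = update_background(bg, static)

frame = [row[:] for row in static]
frame[1][1] = frame[1][2] = 200          # the blob enters
mask = foreground_mask(bg, frame)
print(sum(cell for row in mask for cell in row))  # 2 foreground pixels
```

Note what the mask does *not* tell you: whether the blob is a person, a vehicle, or a shadow. That is the gap the "big brain" layer, however it actually works, would have to fill with motion patterns and context rather than appearance classification.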

[Poster is from BRS Labs]

It's not that we have poor vision, it's that we are really good at contextualizing the video despite poor video quality or resolution. We can process up to HD quality video, but that requires more server resources. Typically we advise directing those resources to the minimum resolution needed because of what our "brain" affords us.

We don't take the conventional approach to video analytics, where much of the resources are devoted to distinguishing people from vehicles from other objects. We are gathering extensive identifiable information about each and every subject in view and, in effect, comparing that information against all the prior subjects. For instance, people walking down a sidewalk: we observe the typical array of people and other objects (bikes, strollers, etc.). We observe the behavior patterns, rates of travel, trajectories, groups, individuals, interactions, and so on. We learn normal behavior and alert on statistically unusual behavior. For instance, if a group stopped, or an individual stopped and suddenly started shaking or climbing the fence next to the sidewalk, we would alert; likewise if a paratrooper suddenly landed inside that perimeter, or a cloud of smoke covered the area, or someone started digging a hole on the outside of the perimeter, or popped up through a manhole. This is as opposed to a rules-based system that would typically alert only once a virtual line or area was crossed.

The real world is unpredictable and there is no way to create enough rules to detect such a wide array of unwanted behaviors in any environment. In addition to the unlimited capabilities of alerting on highly unusual behaviors, our other advantage is that we don't have to program or set any rules to accomplish this, so if there were 1,000 or 10,000 such cameras, we can scale infinitely. You guys did make some good points about combining rules and reason. I will comment on that soon.

Hopefully this is not viewed as "promotion." Just seeking to clarify and temper the information that is here with some information directly from the source.

We are gathering extensive identifiable information about each and every subject in view and in-effect comparing that information against all the prior subjects.

Hobby, how is Giant Gray determining what is really risky? I can imagine a computer system picking out abnormalities, but what percentage of those abnormalities are really a security or operational risk? In other words, Giant Gray might find 100 abnormalities per week per camera on an intersection, but maybe just a few are worth paying attention to. However, the operator has to look at all 100 to get to the few that are possibly relevant.

Related, how much time does it take for Giant Gray to learn a scene? In the past, a few weeks was typically cited.

Excellent questions.

1) Q

----------------------

"Hobby, how is Giant Gray determining what is really risky? I can imagine a computer system picking out abnormalities, but what percentage of those abnormalities are really a security or operational risk? In other words, Giant Gray might find 100 abnormalities per week per camera on an intersection, but maybe just a few are worth paying attention to. However, the operator has to look at all 100 to get to the few that are possibly relevant."

-----------------------

1) A

John, this is probably the most important question and answer for a prospective customer to gain a realistic understanding and expectation. That is the million-dollar question, so to speak.

You are absolutely right.

I set expectations low, which aligns with your hypothetical math. I prepare my customers to expect that 5% of the alerts are valuable or actionable. We have seen the ratio much, much higher, even 50%-90% or more, but in complex environments like major metropolitan public transit or public safety (areas where we have vast installations and tons of data to support this), I consistently see 5%, or 5 per 100 alerts, as being valuable/actionable out-of-the-box (and that is important to note). There is no such thing as a literal false positive with Graydient; all alerts are statistical visual anomalies. But you are spot on to recognize that to a customer it doesn't matter if there is a scientific or mathematical explanation. If the alert is not valuable, it is a false alarm to them.

The reason we have been successful transacting and installing huge analytics systems is that we confronted all of this information up-front, and because of the data visualization and reporting tools that we can and do leverage, we can tell our customers precisely how many alerts per camera per day to expect. The majority of our early-adopter enterprise customers ran pilots to collect several months of alerts and data. They knew exactly what they were getting. And several have gone from a few cameras to hundreds or 1,000+ based on their own experience and the shared experience of their industry peers. The 5% number as a starting point may not sound too flattering, but if we build SOPs up from there, we will meet or exceed those goals every time.

First off: one alert on a criminal act, or any act that threatens life, safety, property, operations or the public perception of any enterprise, can hold unquantifiable value. If it takes reviewing or sanitizing through nineteen 5-second video clips of sand to reach one of gold, and a human life or millions of dollars of damages could be saved or averted, in my mind it is worth the minimal effort. Most un-actionable alerts can be manually filtered in a few seconds: 5 seconds to play the clip, or even a split second to view a marked-up still image of the subject in the scene. For instance, a clip of a subject on railroad tracks: if it is a maintenance worker as opposed to a passenger or bad actor, you can determine that immediately because of his vest and hard hat. That takes a second or two. Conservatively we can factor 5-30 seconds per alert, depending upon the customer SOP...it is literally as quick as look-and-click.

Our customers typically fit a profile where they are willing to invest money, time and resources to have the best chance to make their environment safer and smarter. Graydient is matchless in its ability to scale and to deliver detection on the most unpredictable attacks or events that might occur. And because it is a robust server-based platform, the math we bring to bear is incredibly valuable for both real-time alerting and rapid forensic retrieval, as well as reporting to discern hot spots, patterns and trends, and to chart and understand alert volumes and system health. It also doesn't hurt that we can track exponentially more simultaneously moving objects than any analytics using DSP chips. You are really comparing a Hemi engine to a lawn mower motor (or an electric toothbrush in some cases ;) when you look at what Graydient can do in a complex and busy environment as opposed to edge-based or rules-based systems.

So first, I feel that for customers with human life, public safety, critical infrastructure or precious assets and business flow at stake, it is worth the effort to conduct the hygiene of discarding 95 alerts to get to 5 potentially actionable ones.

Once we establish that, if they have hundreds of cameras, complex environments, or a lot to lose from taking shortcuts, we often uniquely meet their requirements. We certainly can offer detection of behaviors or events that would otherwise be impossible to detect with rules-based systems...and that is a fact.

From that point forward the name of the game is optimization, minimal optimization to be exact. I mentioned out-of-the-box: that 5% marker is my observation of the most challenging environments out-of-the-box. Even that is often unattainable for long-term persistent surveillance using rules-based systems.

Think of this: the rule is the most accurate it is ever going to be at the exact moment in time that the rule is created. From that point forward everything begins to change. For example, suppose you have a tripwire on a fence line which borders a sidewalk, and the camera is several hundred feet from the end of the FOV. If a storm shakes the camera pole and it settles a couple inches to the left, and the sidewalk is to the left of the fence line, the tripwire could move several feet to the left in the actual FOV; the next morning there could be thousands of false alarms. This is just one example, and I know that there are lesser evils and varying applications within different rules-based systems to deal with this kind of thing, but I cannot tell you how many systems I've taken over and been called in to save from redonkulously high alert volumes: hundreds, even thousands, per camera per day, even to the point of crashing servers and systems.

We can uniquely and predictably forecast the alert volumes over time and normalize them to fit the customer's bandwidth to review and respond to alerts. This is a huge advantage when helping customers scope their monitoring resources.

In reality, Graydient does not totally eliminate manual interaction with the system compared to rules-based analytics. It does eliminate 80-90% even in the most complex areas, and more like 99% over time. I conducted a friendly interrogation of a competitor's system recently, and while the executive indicated that the software was less than half our cost, they recommend 4 hours per quarter for tuning and testing. At $250 an hour that is an expensive proposition! The comparative ROI in our favor is staggering.

The 10-20% of the comparable effort on our part (at installation, and not repeated) is key to improving performance over out-of-the-box alert quality ratios. To John's point, not all customers are interested in all anomaly types. And on the reason-plus-rules hybrid approach, you guys are right: we need rules to get the most optimal performance for the customer. But it's not rules a la conventional video analytics; we need rules applied to Graydient, to the bounds of the AI, to focus on targeted output for the customer. We do have optimization tools and techniques to accomplish this, and are currently developing and trying more. For instance, Alert Directives: every alert generated has two buttons which you may click, always or never. The user can manually specify that the specific behavior type or alert type will always or never alert, regardless of the statistics. That is how, for instance, we deal with a system seeing maintenance workers appear hundreds of times on the track: assuring that the system does not learn that behavior as being normal, thus assuring that if a passenger or pedestrian ends up on the tracks, we can alert.
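The always/never Alert Directives described above amount to a per-alert-type user override layered on top of the statistical decision. A minimal sketch of that structure (inferred from the description, not the product's actual code):

```python
def make_directive_filter():
    """Track per-alert-type user directives that override the statistics."""
    directives = {}  # alert_type -> "always" | "never"

    def set_directive(alert_type, choice):
        assert choice in ("always", "never")
        directives[alert_type] = choice

    def should_publish(alert_type, statistically_anomalous):
        """User directives win in either direction; otherwise trust the stats."""
        choice = directives.get(alert_type)
        if choice == "always":
            return True    # e.g., person-on-tracks, even after maintenance
                           # crews have made that behavior look "normal"
        if choice == "never":
            return False   # e.g., slow pedestrians in a busy station
        return statistically_anomalous

    return set_directive, should_publish

set_directive, should_publish = make_directive_filter()
set_directive("subject_on_tracks", "always")
set_directive("slow_pedestrian", "never")

print(should_publish("subject_on_tracks", False))  # True
print(should_publish("slow_pedestrian", True))     # False
print(should_publish("wrong_way_vehicle", True))   # True (falls through to stats)
```

The interesting design consequence is the "always" path: it decouples publishing from learning, so the model can absorb a frequent behavior as normal without the customer losing alerts on it.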

We can also focus on specific regions, or have heightened alert criteria in one region and discrete alert criteria for another. Another optimization technique: if a busy subway station gets 10 alerts per week on the slowest-moving pedestrians moving through the scene, and they really only care if people are running or stopping/loitering where they normally wouldn't, we would simply turn off the slow-moving alert types. If they had 1,000 similar camera views, that change can be deployed system-wide to hypothetically eliminate what would be viewed by the customer as 10,000 nuisance alerts per week. These are just a few best practices that we have learned on the job over my almost 4 years with the technology now.

I see the world through my customers' eyes, and if I have an idea for anything we can do to make life easier and better for my customers, we have to do it. The good news is that our new CEO sees things the same way. He insists upon transparency with customers and partners, and he is an open book on doing anything we can do to better the product or better the process for our customers' benefit.

We have a zero tolerance policy at Giant Gray for misleading a customer or partner. This suits my approach anyway. I usually spend the first conversation with a customer giving them all the reasons they might consider doing business with another vendor. Video analytics has essentially become freeware or middleware. If I have a quiet perimeter and only need perimeter intrusion, why would I pay for Graydient if I could buy a camera with an onboard tripwire or the like, included? Good question. If you can do it for free...do it. Per above if their operations are mission critical or public safety or a very busy environment or a large volume of cameras, or the costs of supporting the free system outweigh the costs of making an enterprise investment...then after I try to talk them out of it, they will have to insist on doing business with me. At that point I go to work for them and deliver the best possible expertise and support to accomplish their mission.

Our new Graydient 5.0 platform has been in the works for some time and our new leadership has frankly accomplished more in the last 6 months than the company had accomplished in the previous two or three years. Prior to rejoining (after my two year hiatus), or rather prior to signing up to build a new company around the technology, which is exactly what Giant Gray is doing- I had a heart to heart with the new CEO to make sure that we were focused on the security and video business and that we would have the resources needed to win and make each and every customer a reference account. I also called my old customers to see what was going on with the systems, what they were happy with, what needed work and what we could change. I was pleasantly surprised by most of my discovery. I did uncover one major account that was experiencing dubious results. I took the CEO with me to visit that account. Since that time we have applied some basic optimization and the turnaround is astounding.

Our new CEO is allowing us to speak directly into the product roadmap on our customers' behalf. Heck he is taking input directly from some of our major customers. He is in plain sight building a healthy and sane business around this amazing technology that...let's just face it...could stand a redo in light of the founders believing they could make a technology exit as opposed to building a business. New leadership understands that we are absolutely building a business and we are in it for the long haul. Completely different mindset. Our predecessors thought they had lightning in a bottle, and did, but that alone is not going to be monetized in the security industry.

We are hiring rock stars and the character, tone and tenor of the company is brand new. Steve Sulgrove is all about the customer. He is giving us the resources to make the optimization process smarter not harder and we have a do what it takes mentality. With the open roadmap being customer focused as opposed to acquisition narrative focused, I am chomping at the bit to do my best work here and now at Giant Gray.

I have written A LOT. I am happy to try and answer all your questions but I must take a break. More later....

"If it takes reviewing or sanitizing through nineteen 5-second video clips of sand to reach one of gold, and a human life or millions of dollars of damages could be saved or averted, in my mind it is worth the minimal effort."

And how often are you saving human lives or eliminating millions of dollars of damage? Once a day? Once a year?

"The reason we have been successful transacting and installing huge analytics systems"

What successes? What the market has seen is your company spend $100+ million, resulting in burning down the old company identity.

Who are these success stories?

"the character, tone and tenor of the company is brand new."

Ok, let's test that. Let Brian speak to 3 of your end users so he can ask them details about how the systems are performing.

John, we have alerted on assaults, fights, fires, floods, man-down, vandalism, break-ins, numerous daredevil-type occurrences like a kid in a compact car following a train into a tunnel, jumpers, climbers, small children entering dangerous areas, homeless people living under rail platforms, end-of-platform, right-of-way, and other dangerous areas unbeknownst to the customers before our alerts, lurkers and loiterers at ATMs, unusual interactions of a physical or romantic nature in unusual places like hallways or stairwells, pedestrian tunnel intrusion, individuals lurking, stalking, or loitering unusually outside the perimeters of critical infrastructure, reckless and wrong-way vehicle traffic, car accidents, cars stopping in unusual locations like in the street for no reason or on sidewalks, fare jumpers, a subject wielding an axe in a public place, people appearing on a rooftop (which never occurred previously) across the street from a national political convention...among countless others that I myself have been privy to, mostly from my direct customers. I have permission to show a few of these and will happily do so in Vegas if you'd like to meet up. We can't really claim that we saved a life with certainty; unfortunately you only know for sure if the loss or damages are realized. I can say that these alerts described, and many others, are real-world, real-time alerts, and that our customers have used them to improve safety and security.

These types of alerts invariably happen. While we have data on every alert, and in fact data on all behaviors whether anomalous or not, we can and do present the math to reliably predict alert volumes. The heat mapping and hot-spot clusters may, I repeat may, be applied to predictive analytics for knowing when traffic and certain behavior types trend or cluster, allowing resource allocation such as additional patrols or stepping up monitoring resources for predictably busy times and locations. At the end of the day, we can't really predict when these things will happen...but generally speaking, we are the only technology on the market that could even hope to alert on these types of unusual behaviors.

As for testing our mettle- I offered to take Brian to a customer site yesterday. I am thinking of a major account where there have been lessons learned- and applied. It may take a few weeks to set this up. I have several new prospective customers requesting the same thing and I cannot inundate my customers, but I am confident I can make it happen. I have several customers who will happily talk to him by phone and that could be much quicker to arrange. Not sure if we have the bandwidth to do it before Vegas or not, but we can and will make that happen. We have ports, major cities, transit authorities, critical infrastructure, petrochem, banks, etc. who are long term customers some spanning several years, who have systems installed and in use and are adding and expanding. We have some in varying stages of construction and installation- but even those have hundreds to 1,000+ of mature cameras and are adding hundreds more, hopefully thousands!

We do have a couple that have stalled due to a variety of reasons- most of which are customer or contracting related and are out of our control. I am aware of a couple of customers that have struggled to squeeze the juice out of the system- but my accounts are all humming or are in process. We will be visiting all accounts offering free software upgrades to our new platform and under new leadership there are no obstacles to our new and improved best practices specific to video analytics, which make the most sense for customer focused performance.

Keep in mind that some of this is a work in progress, but it is a new day. We are an open book and happy to now cooperate with IPVM- assuming we get a fair shake. More technical updates to follow...

Hobby, does GG have a pre-built understanding of periodic environmental changes and the transformations that spring from them?

To wit, does GG know about snow before the first winter arrives?

Does GG know its ok if people are suddenly carrying long pointed poles in anticipation of rain?

I think neural networks and self-cognition are powerful ways of learning, but since you may have to wait years for certain non-threatening anomalies (e.g. eclipses) before the system can learn, can it really be effective within months of deployment?

If valuable alerts and nuisance alerts are the players, it's not a zero-sum game.

First, customers must understand and accept that there will be nuisance alerts. Particularly with video as the data source. Video is the most inconsistent data source with countless naturally occurring variables let alone noise, hum, ghosts, degradation of signal, intermittent frame loss, attenuation, occlusion, blooming, jitter, sway, video snow, actual snow, ad infinitum!

The key is to properly set expectations, optimize targeted performance consistent with customer priorities and to refine operator SOPs for maximum efficiency and effectiveness.

In my experience, over a decade in analytics, the cognitive approach is vastly superior to rules-based analytics.

You can absolutely harness significant value predictably from the system within a few weeks. In answer to John's earlier question about learning, the out-of-the-box learning period is set to two weeks. Some environments may be quicker or take longer. Busier environments that have relatively consistent behavior patterns, like city streets with common traffic and pedestrian flow or train platforms, may actually reach publishable alert quality sooner than a quiet environment where not much happens. But then again, if not much happens, there would be little to no risk of excessive volumes either.

The determination of when enough learning has occurred before users begin to monitor the alerts is entirely subjective, dependent primarily on the volume of daily alerts across the enterprise that the customer is prepared to monitor. We do have tools to normalize alert volumes on a per-camera, group, or enterprise basis. In effect, the goal is to alert on the stuff that matters within a manageable volume. Because every alert holds reportable data to understand that alert in historical statistical context, we can use that information to effectively set the trap to accomplish that goal and predict the alert volume. As I mentioned in an earlier post, baptism by fire has taught us some easily applied techniques to better fill the bucket of alerts with more high-quality alerts and ignore or filter visual anomaly types that are of no interest to the customer. These techniques may also apply to weather phenomena but are rarely required.
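The "set the trap" idea, picking a threshold from historical alert statistics so the daily volume lands within what the customer will monitor, can be sketched roughly like this. (A hypothetical illustration only; the function and score values are made up, not GG's actual tooling.)

```python
def threshold_for_volume(historical_scores, alerts_per_day, days):
    """Pick an anomaly-score threshold so that, had it been applied to the
    historical scores, roughly `alerts_per_day` alerts would have fired."""
    budget = alerts_per_day * days
    ranked = sorted(historical_scores, reverse=True)
    if budget >= len(ranked):
        return min(ranked)  # budget exceeds history: alert on everything seen
    return ranked[budget - 1]  # score of the budget-th highest event

# One week of anomaly scores for a single camera (synthetic data).
scores = [0.1, 0.2, 0.15, 0.9, 0.3, 0.85, 0.05, 0.4,
          0.95, 0.2, 0.1, 0.6, 0.3, 0.7]
t = threshold_for_volume(scores, alerts_per_day=1, days=7)
predicted = sum(1 for s in scores if s >= t)  # predicted alert volume
```

The same calculation could be run per camera, per group, or enterprise-wide, which is essentially what "normalizing alert volumes" amounts to.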

As for your questions on periodic weather changes, associated accumulation, snow as an example, and benign anomalies such as eclipses: here goes.

Rather than needing to build canned environmental recognition algorithms and attempt to apply them to each and every different field of view, the very nature of Graydient is to observe gradual change specific to the characteristics of that view and adapt to it. This is vastly superior and more reliable than applying a prefabricated model, because our model is built entirely from the actual relevant visual data in the unique field of view. This is the most intelligent possible way to perform analytics: relying only on the actual data, not canned images or algorithms in the hope that the actual data is similar.

Cyclical environmental change, like the sun's position in the sky (or rather our position from the sun ;) and shadows lengthening and shortening seasonally, typically gives most rules-based systems fits. Graydient easily adapts to this change, factoring it into the model and rarely experiencing weather-based issues. Keep in mind that weather can pose issues to all systems. The difference between GG and others is that they have to battle the same issues over and over again, while if we are challenged by visual and environmental anomalies, we learn from them, resulting in improved performance in the future! (Into perpetuity, getting smarter and smarter!)
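In its simplest textbook form, "observe gradual change and adapt" is a running background estimate that slowly absorbs slow change (lengthening shadows, snow cover) while fast change stands out as foreground. A minimal per-pixel sketch (a generic illustration of the principle, not Graydient's actual model; parameter values are arbitrary):

```python
import numpy as np

def update_background(background, frame, alpha=0.01, diff_thresh=30):
    """Exponentially blend the new frame into the background estimate.
    Slow changes are absorbed into the model over time; fast changes
    exceed diff_thresh and stand out. Returns (new_bg, fg_mask)."""
    fg_mask = np.abs(frame.astype(float) - background) > diff_thresh
    new_bg = (1 - alpha) * background + alpha * frame
    return new_bg, fg_mask

bg = np.full((4, 4), 100.0)   # learned background intensity per pixel
frame = bg.copy()
frame[0, 0] = 200.0           # a sudden bright object at one pixel
bg, mask = update_background(bg, frame)
# mask[0, 0] is True (fast change); everywhere else False
```

A real system maintains far richer per-pixel statistics, but the key property is the same: the model is built from the scene's own data, not from a prefabricated template.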

As for snow or rain: unless there is very heavy snow or rain (think white-out or horizontal rain), we effectively see right through it. The only preset model, if you will, is the minimum and maximum tracker "box" sizes, which can also be changed. 99% of precipitation should never be capable of alerting, because it would not constitute a large enough pixel change to ever alert. If there are white-out conditions due to weather, then consider this: you cannot see anything in the video anyway. We would in effect either alert because of sudden scene change or provide a system alert (which would likely be the case with any analytics system), but these alerts can easily be prioritized differently than behavior alerts and essentially be monitored passively rather than actively. And in man vs. nature, sometimes nature wins. This is part of the customer's learning curve, which we expertly guide them through, and these laws of nature and physics apply to all analytics, certainly not just us. And in reality we invariably have the most intelligence in the software to vet these inconveniences.
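The min/max tracker box size acting as a precipitation filter boils down to a simple gate on bounding-box dimensions. A toy version (illustrative only; the pixel limits here are invented, not GG's defaults):

```python
def passes_size_gate(box_w, box_h, min_px=8, max_px=400):
    """Reject trackers whose bounding box is implausibly small (raindrops,
    snowflakes, sensor noise) or implausibly large (whole-scene change).
    Only boxes within [min_px, max_px] on both axes can ever alert."""
    return min_px <= box_w <= max_px and min_px <= box_h <= max_px

passes_size_gate(2, 3)    # snowflake-sized blob: rejected
passes_size_gate(40, 90)  # person-sized blob: passes
```

This is why most precipitation "would not constitute large enough pixel change to ever alert": individual drops and flakes fall below the minimum box size.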

As for an eclipse, I doubt that would even present an opportunity to generate an alert if there is any artificial light source.

I have seen environmental anomalies alert, some unwanted and some wanted- like fires and floods. It is pretty spectacular when your analytics system is the first to inform you that there is a fire on your property, or surging water approaching your floodgate. True stories.

I have seen some shadow issues in early systems, some unusual alerts on the first major snow accumulation, and some light anomalies from reflections appearing as transient objects, as well as animals and vermin early on. Again, each of those scenarios presents a teachable moment for Graydient, and frankly, all video analytics have challenges. We absolutely need to discuss those challenges before moving forward. But I know that I know that we can deliver value with our patented approach that no one can mimic. And if you need us to mimic rules-based analytics, without creating rules per se, but by focusing our analysis, we generally can, and can do it more robustly than can otherwise be accomplished.

I haven't even begun to discuss what's new in 5.0. More to follow...

And thanks for the questions.

Thanks for the answers.

What you are saying leads me to think that the system is always learning more, but leaves me wondering: is it ever understanding more, epistemically speaking? That is, does it ever learn that a fire is a fire, or is a fire just another anomaly that triggers an alert?

If it is the latter, is it possible to even know how the system is likely to perform? Is there no way of teaching it except for "the school of hard knocks" and the "always/never" overrides?

Also, when something goes awry, can it explain itself?

Switching gears, it's good to hear that you are back focused on physical security, but there's one thing I haven't seen mentioned yet:

Wasn't a large share of the BRS IP sold to Avigilon for several million not too long ago?

Were you granted rights in perpetuity for the IP that you sold? Or was the IP not valuable (to you, at least)? Why or why not?

Thanks!

OMG, I just wrote a comprehensive response then googled epistemically to make sure I understood it completely and forgot to use another tab and lost my whole response! It's late and I have calls in the morning and need to get ready to hit the road...business in the big apple Wed. I will provide a thorough response soon.

Quick answer is we can sell legacy software. Our business is not inhibited at all. Our new release 5.0 is all new IP that belongs exclusively to GG. And it will take more detail to answer the rest...

Hobby, have you any comment on patent #9,292,743, "Background Modeling for Fixed, Mobile, and Step-and-Stare Video Camera," issued to PureTech Systems and described thusly:

Having a keen understanding of the changing background, allows for more accurate target detection, and more importantly, a reduction in false alarms. One significant claim of the new patent is that no assumption about the nature of the background is required. The software and/or installer does not need to concern themselves with the type of scene for which the algorithm will be utilized.

More specifically, the pixel-based multimodal background model makes no assumption about the number of modes or functional forms of the underlying distributions while exploiting radiometric information along with gradient magnitude and orientation through one, or a host, of such models to adapt to backgrounds that change with multitude of periodicities over the same or different regions of the scene while remaining geospatial-aware.

The "no assumption" aspect struck me as similar to your credo.
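For readers unfamiliar with the jargon, the "no assumption about the number of modes" idea can be illustrated with a toy per-pixel model that simply grows modes as new intensities appear. (A loose sketch of the general multimodal background-modeling approach; not PureTech's patented method or GG's implementation, and all parameters are invented.)

```python
def observe(modes, value, match_dist=10.0, alpha=0.05, max_modes=5):
    """Per-pixel multimodal background model (sketch).
    `modes` is a list of [mean, weight] pairs. A new observation either
    reinforces the nearest matching mode or spawns a new one, so no prior
    assumption is needed about how many modes the background has
    (flickering sign, swaying branch, etc.)."""
    for m in modes:
        if abs(value - m[0]) < match_dist:
            m[0] += alpha * (value - m[0])  # adapt the matched mode's mean
            m[1] += alpha * (1 - m[1])      # and strengthen its weight
            return True                     # value explained: background
    if len(modes) < max_modes:
        modes.append([value, alpha])        # new, weak mode
    return False                            # unexplained: foreground

pixel = []                  # no prior assumption about this pixel
observe(pixel, 100.0)       # first sighting: no mode yet, foreground
observe(pixel, 100.0)       # matches the learned mode: background
observe(pixel, 250.0)       # far from any mode: foreground again
```

A production model would also decay and prune weak modes and work on distributions rather than single means, but the core "let the data define the modes" property is the same.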

BRS Salesman Hobby:

"we have been successful transacting and installing huge analytics systems"

BRS Lab's CEO's statement:

"In the past, there were a lot of great ideas, but nothing delivered a finished product to the market."

Can anyone explain what will become of our investment in BRS Labs?

#2, from our original post:

Giant Gray's CEO declined comment on how this would impact existing investors. However, we believe, given the lack of success and the new fundraising, previous investors are likely to be diluted. This, combined with the time now needed to turn around / build Giant Gray, means that previous investors are unlikely to see any return any time soon, if ever.

So, net/net, don't get your hopes up.

As an investor, I would contact BRS Investor Relations formally and request information on current financial / operating conditions.

John

Thank you a lot for your response.

Borut

Giant Gray interview at ISC West - seems pretty high-level overall, no real look at the software / UI itself except for an intersection clip at the end:

It looks like Giant Gray has brought in another ~$5.8M in funding, according to new SEC Filings.

We asked Giant Gray for comment on this, but have not received any response.

Update: Giant Gray has become Omni AI following a lawsuit against the BRS Labs founder; a local Houston newspaper has a detailed article on it: Tech company's backers shift assets following lawsuit.

Their brutal ride continues.

Their funding might be better spent developing an AI to detect unusual activity inside their company.

Sure but it would go off non-stop....

That's just proof that the false alert rate is really low.
