When to consider a solution IoT?

Say IoT again .. I Dare You

I guess IoT is reaching the point on the hype curve where everybody wants to be associated with it. This leads to mind-boggling, toe-curling news like this one:

http://blogs.msdn.com/b/windows-embedded/archive/2014/05/13/restaurant-chain-transforms-service-with-the-internet-of-things.aspx

My brother, who works in Big Data, has been rolling his eyes at news like this for the last few years. He told me the "hardcore" Big Data people are trying to find a new word for what they do; Big Data has been so misused that today it is considered overloaded. Now I do appreciate all the marketing help we can get in lifting general knowledge about IoT, but I do not recognize a client-server POS solution like the one mentioned as an IoT solution. Sorry.

But when should a solution be considered IoT?

It actually made me think about when I DO recognize a solution as a "real" IoT solution. Is it the number of devices, the way they are connected, the resources on the connected devices? Well, it should at least involve a device that is Internet-connected. Would a Mac or Windows PC ever be recognized as part of an IoT solution? Actually, I don't know. Wikipedia doesn't help. It's in much the same category (though not as deep) as the question of when a living being counts as a living being: when you see it, you know it, but it is very hard to give an exact, fully covering definition.

Nabto v1.0 fail – trying to get consumer attention in a B2B world

What is nearly forgotten, even internally at Nabto, is that the first version of Nabto was actually a big failure. It was marketed as a P2P remote-access web server.

It’s all about “USERS … Users… users…” 🙂

You could install our software on a PC bundled with a web server (Apache), and you could then reach this PC remotely from a client (PC, tablet, mobile phone, etc.) anywhere in the world, P2P style.

The idea behind this was to create a free offering for consumers that would be widely adopted, giving us a "technology user-base" we could later use to convince industry players to adopt our platform.

Who cares?

Man… this line of thought was pretty wrong. First of all, creating user value in something that was mainly designed for B2B turned out to be a lot harder than we imagined. Secondly, we found out that the industry didn't even care about user adoption. The industry was used to pushing technology to users, and nobody ever asked us how many users were using our system. All in all, we were just creating a lot of noise around what we did (B2B customers couldn't see the point of the consumer system), and we created a lot of technology obstacles for ourselves.

Pivot

The guy who finally convinced us to do otherwise was Preben Mejer. We showed him our system and he basically told us: "Cut the crap. If you want to target B2B device vendors, create a product that is focused directly on them. They don't care about your user-base anyway." And surely they didn't.

The B2B device vendors have their own user-base. If they want that user-base to get something new, they are used to just pushing it to them. No one has ever asked us how many users have been using our plugin, or how big our install base is.

Actually, this was not the last time we did a pivot (even though the next time it was a much smaller turnaround, maybe not even big enough to call it that), but I may talk more about this later on.

C++, why oh why …

[Image: TCC2]

(Couldn't help putting in a picture of a very old C++ compiler; it popped up when searching for C++, and I've actually used it... a lot.)

People ask us why we chose C++ as the main programming language of our system, especially for the central parts. The argument is that many newer and more "beautiful" languages could have been chosen instead: Erlang, Java, Python, Haskell. (And before the religious warm-up to an argument starts: I don't mean anything by the order, or by leaving other good choices out. These are just the names that popped into my head while sitting down to write this; after the next coffee break, a different list would have been presented.)

When I founded Nabto I had just left a Java environment (the company Logiva, which markets Signflow), so Java could have been a good choice. But the environment I left was mainly administrative web- and cloud-based software (Business Process Management with a main focus on invoice and procurement handling), and Java was chosen for that reason; speed and memory footprint were not the main concerns. (Well, the reasoning behind the choice made around 2003 was probably a lot more complex and random than that, but in retrospect this sounds nice.)

I'm not trying to put Java down; you can definitely create small and fast programs with it. But as a programmer/architect (call it what you want), the layout and management of memory is something Java tries to hide and abstract away. From the start, we wanted Nabto to be a very scalable system. Everybody in the Internet of Things industry talks about billions of devices, and we set out with a vision to support that kind of load without having to build supercomputers.

I remember a university assignment on creating the fastest possible algorithm for multiplying large matrices, and even doing it in parallel. The naive implementation is to create two arrays [n][m] and start multiplying and adding rows times columns. One of the (at that time) surprising findings was that rearranging the memory layout of the second matrix, so that the elements of each column sit right next to each other, gave a speedup of a factor of 2 to 4, depending on the architecture. The reason is that a CPU cache miss is expensive. When the matrices are so big that a row cannot fit in cache, iterating down a column (in the naive implementation) causes a cache miss on every access, and each miss means loading a chunk of second-level cache memory (or worse) at the position of the column element. Rearranging the columns so they are laid out linearly in memory means a cache miss and chunk load only happens once all the elements of the previous chunk have been used.

Maybe a lot has changed since then, but being close to the bare metal and having control over the memory layout probably still matters. C++ seemed like a choice where we still had some abstraction and powerful tools/libraries, but also, to some extent, control over the memory layout and over what would actually be running, i.e. the machine code.
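The memory-layout trick described above can be sketched like this (an illustrative reconstruction, not the original assignment code; in production you would reach for a tuned BLAS library instead):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Naive multiply: the inner loop walks DOWN a column of b, so for large
// matrices every access to b[k][j] lands in a different cache line.
Matrix multiply_naive(const Matrix& a, const Matrix& b) {
    std::size_t n = a.size(), p = b.size(), m = b[0].size();
    Matrix c(n, std::vector<double>(m, 0.0));
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < m; ++j)
            for (std::size_t k = 0; k < p; ++k)
                c[i][j] += a[i][k] * b[k][j];
    return c;
}

// Cache-friendly multiply: transpose b first, so each column of b becomes
// a contiguous row. The inner loop then scans both operands linearly in
// memory, paying one cache miss per chunk instead of one per element.
Matrix multiply_transposed(const Matrix& a, const Matrix& b) {
    std::size_t n = a.size(), p = b.size(), m = b[0].size();
    Matrix bt(m, std::vector<double>(p));
    for (std::size_t k = 0; k < p; ++k)
        for (std::size_t j = 0; j < m; ++j)
            bt[j][k] = b[k][j];
    Matrix c(n, std::vector<double>(m, 0.0));
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < m; ++j) {
            double sum = 0.0;
            for (std::size_t k = 0; k < p; ++k)
                sum += a[i][k] * bt[j][k];   // contiguous in both a and bt
            c[i][j] = sum;
        }
    return c;
}
```

Both functions compute the same product; only the access pattern differs, which is exactly where the factor 2 to 4 came from on the hardware of the day.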

So far it has proven to be an okay choice. We are starting to handle a lot of devices, and resources are beginning to be an issue. Yes, we could just throw more servers and more memory at the problems, but why, if we don't have to? We can run a lot of devices on a small, simple setup with a few C++ servers, and we are still able to scale with more CPU power etc. (probably more on that in a later blog post).

Our software has to work in PC browser, tablet and mobile phone setups, and normally, if you want to incorporate something into these kinds of environments, C++ is a good choice.

Another point we hadn't thought much about is that compiled programs are harder to reverse-engineer. We are starting to have customers and partners that are quite loosely connected to us, and transferring non-compiled (interpreted) programs to them could be an issue. Yes, it could be handled with the appropriate lawyer stuff, but I'd rather send a compiled, somewhat obfuscated program. And yes, everything can be reverse-engineered, but it's nice to know that if somebody wants to give it a try, they really need to invest a lot of time doing it.

The big mistake we made was to think that the embedded device industry had moved, or would soon move, to C++. Bummer. We actually had to rewrite most of our early C++ device code and convert it to plain C, which is why some of our software is called uNabto (micro-Nabto): basically the big Nabto C++ code rewritten in C (hence micro). I might write more on this subject later on.

Disruptive Internet of Things … what?

[Image: bulb-disruption]

I know… the Internet of Things is still in its infancy, and we say we are doing it disruptively. To be disruptive, you would need to disrupt something that hasn't even really gained traction yet, and that doesn't sound right.

But I do think it's the best way to communicate, in short form, what we are actually doing at Nabto. If you analyze the IoT academic literature and our competitors, what you will find is that everybody presents the same general setup: devices with minimal state post data to a cloud server, the cloud holds the data storage, GUI computing power and presentation logic, and the clients, mainly browsers, interface with virtual representations of the devices through this (web) GUI (please tell us if you have found something else). You could argue that putting a web server on an embedded platform is also Internet of Things, but let's leave that out of the picture so we don't complicate things.

Our approach to IoT is instead that what you want is direct, interactive access to your device. This is done by (re)using the standard P2P technology you see in VoIP systems like Skype, but also in Internet games etc. Creating such direct connections maximizes reuse of existing Internet lines and minimizes resource usage on the central (cloud) system. It's definitely not the easiest way, but we do think that in the long run it is THE way, if not for all, then at least for a majority of the Internet of Things out there to come.
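The shape of such a direct connection can be shown with a toy sketch (a simplified illustration using POSIX UDP sockets on localhost, not Nabto's actual protocol: in the real world, both endpoints sit behind NATs, a rendezvous server tells each side the other's public endpoint, and keep-alive packets hold the punched hole open):

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cassert>
#include <cstring>
#include <string>

// Create a UDP socket bound to 127.0.0.1 on an OS-chosen port, and report
// that port -- in a real system this is the endpoint a rendezvous server
// would learn and hand to the peer.
static int make_endpoint(uint16_t* port_out) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;                          // let the OS pick a port
    bind(fd, (sockaddr*)&addr, sizeof(addr));
    socklen_t len = sizeof(addr);
    getsockname(fd, (sockaddr*)&addr, &len);    // discover the chosen port
    *port_out = ntohs(addr.sin_port);
    return fd;
}

// Send a datagram straight to the peer's endpoint -- no relay in between,
// which is the whole point: the cloud only brokers, it never carries data.
static void send_to_peer(int fd, uint16_t peer_port, const std::string& msg) {
    sockaddr_in peer{};
    peer.sin_family = AF_INET;
    peer.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    peer.sin_port = htons(peer_port);
    sendto(fd, msg.data(), msg.size(), 0, (sockaddr*)&peer, sizeof(peer));
}

static std::string receive(int fd) {
    char buf[512];
    ssize_t n = recvfrom(fd, buf, sizeof(buf), 0, nullptr, nullptr);
    return std::string(buf, n > 0 ? static_cast<size_t>(n) : 0);
}
```

Once each side knows the other's endpoint, the request ("get temperature") and the reply travel peer-to-peer, so the central system's load stays constant no matter how much data the devices exchange.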