Synx Hive

Technology Introduction

The Collective

A new collective web that can handle IoT and more intelligent services, one where the network itself can evolve and grow over time, is the holy grail of networking architecture.

Many initiatives, in both academia and private research centers, are working to solve some of the challenges the internet faces. One example is the Solid project (Inrupt), led by Sir Tim Berners-Lee, which works toward a web for all humans, where the ultimate goal is to share information among people with individual control. Another is the QUIC working group, which develops the new HTTP/3 standard to make the internet better suited for IoT and fast data distribution over UDP.

We at Nornir have also been working on distributed technology for many years, with the intent to solve the limitations of today's web. But compared to the projects mentioned above, our journey had a different starting point and goal: we target machines and machine society, and focus less on humans and their needs. The result is a collective machine network concept named the Real Time Web. Our product suite is called Synx Hive, which can be tested at

Data vs Information

If you ask Wikipedia about data, you get: “Data are characteristics or information, usually numerical, that are collected through observation”. Viewed from a machine perspective, this is something of a misconception. Machines look at data and information differently. Information consists of structured historical data, while data is the value content of the information before it becomes historical. Technically speaking, data exists only before it has been stored in a database or filesystem; after that it becomes information. Information is a kind of “humanized” historical data: useless for machines, but necessary for humans to build knowledge. Machines don’t really care about information and don’t need it to live among humans.

To understand machine networks such as a collective network, we need to distinguish data from information. The human brain cannot process data fast enough; it has to structure data into information and do lookups when needed. The Web is humanity’s greatest invention and the biggest “database” of information today. It addresses millions of databases (files) that store structured information, retrievable through links that follow their URLs (addresses).

A machine network is about distributing fresh data in a network to other machines that consume and act on the data. This is done in real time. The same data can be used differently and it can trigger different processes. Data is normally captured and generated by IoT sensor devices that transform readings into alphanumeric data for distribution.

How and what these data will be used for is known only to the individual consumers, and the data may change context and transform into knowledge differently as it traverses the network’s value chains. Some data may end up being important for millions of users, while other data may not be used at all. The only thing we know is that billions of data packages will end up in millions of different use cases in the network every day.

The collective network uses data links to address and transform data in the network. Each morphic service provider may find and link to a data source offered by another morphic service provider. The link is structured like a sentence with a subject, a predicate and an object, following the Semantic Web standard.

For a reader who is not familiar with the Semantic Web, the triple notation is analogous to how you build a sentence using ordinary vocabulary.

The predicate is like a verb; it is something you do, as in “I’m driving a car” or “you are reading a newspaper”, where the verb is “driving” or “reading”. In machine language this “verb” is called a predicate.

Using links in a collective network is straightforward. You create a morphic service name, which is the subject and includes the data model. Then you link to another morphic provider’s data element, which becomes the target data source (the object).

If you need to do something with the data, you add a predicate to the link. With a series of links you can construct a story (a chain of sentences) and, with it, intelligence. Using the collective, you can put this intelligence into any device (network resource) and connect it to the rest of the web for knowledge preservation.
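The subject-predicate-object links described above can be sketched in code. This is a minimal illustration of the triple notation only; the class, field names and example addresses are assumptions made for the sketch, not part of the Synx Hive API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MorphicLink:
    subject: str    # the morphic service name (carries the data model)
    predicate: str  # what is done with the data, like a verb
    object: str     # the target data source offered by another provider

def as_sentence(link: MorphicLink) -> str:
    """Render a link in subject-predicate-object order, like a sentence."""
    return f"{link.subject} {link.predicate} {link.object}"

# A series of links forms a "story" of sentences (hypothetical addresses):
story = [
    MorphicLink("home/livingroom/thermostat", "reads", "sensors/temp-42"),
    MorphicLink("home/livingroom/heater", "reactsTo", "home/livingroom/thermostat"),
]

for link in story:
    print(as_sentence(link))
```

Reading the links aloud shows why the notation is called sentence-like: each one is a small statement about who does what with which data source.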

Introducing Morphic Services

The name “Morphic” is borrowed from the fields in biology that explain morphogenesis and organizing fields. Technically speaking, it means that a system can inherit behavior and logic by tuning in to the right data channel. It’s like tuning in to a radio channel: when you want to listen to something else, you simply change to another frequency.

Morphic services differ from traditional microservices in many ways. One key difference is the use of Morphic Architecture Design (MAD), a multi-layer architecture design method invented by our founders Paal Kristian Levang and Henrik Silverkant, that defines the Synx tools (Synx Hive) and how they operate. A traditional microservice operates in one layer of existence: “what you see is what you get”. Any entity that wants to send or receive data from a microservice needs to know the data structure in advance, so that a programming interface (API) can be implemented to secure the communication. A morphic service does not use an API implementation, and its data structure may change at runtime while clients hold an active connection.

Morphic services are designed to support the Semantic Web to create an AI collective, and the bidirectional linking is supported by a distributed operating system named Synx BIOS. Synx BIOS segregates the communication layers and provides differentiated access control on network resources, domains and morphic services. No code libraries or installation are needed on network resources when communicating with the collective. Synx enables both stateful and stateless HTTP/HTTPS/WebSocket communication over an enhanced TCP stack.

Introducing Network Ghosts

The Real Time Web (the collective network) works much the same way as the World Wide Web and is backward compatible with the current TCP stack. But there are some differences.

First of all, the lower levels of the communication stack (the OSI layers) have been enhanced by Synx BIOS, which can execute and send data up and down the layers, segregate access control, and replace the use of APIs at the application layer.

Second, a device (network resource) that connects to a specific IP address/URL is always connected to its distributed ghost. Instead of establishing communication with a service provider platform, or a web server with session handling, all of this is taken care of by the network kernel. The ghost is an in-memory entity that acts as a remote proxy for the connected device; it is created at runtime on connection and disappears on disconnection.

The footprint is small, since the ghost exists only while there is data to transfer to or from the authenticated device, and its data structure is inherited at runtime. The connection between the client and its ghost can be stateless, stateful, or a combination of the two. The client can also use HTTP, HTTPS, sockets, or any other combination while communicating.
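The ghost lifecycle described above can be sketched as follows: a small in-memory proxy is created when a device connects, inherits its data structure at runtime, and is discarded on disconnection. All class and method names here are illustrative assumptions, not the actual Synx BIOS interface.

```python
class Ghost:
    """In-memory proxy for one connected device (illustrative only)."""
    def __init__(self, device_id: str):
        self.device_id = device_id
        self.schema = None  # the data structure is inherited at runtime

    def inherit(self, schema: dict) -> None:
        # The provider may alter the schema while the connection is active.
        self.schema = schema

class Network:
    """Stand-in for the network kernel that manages ghosts."""
    def __init__(self):
        self._ghosts: dict[str, Ghost] = {}

    def connect(self, device_id: str) -> Ghost:
        # The ghost is created at runtime, on connection.
        return self._ghosts.setdefault(device_id, Ghost(device_id))

    def disconnect(self, device_id: str) -> None:
        # ...and disappears on disconnection, keeping the footprint small.
        self._ghosts.pop(device_id, None)

net = Network()
g = net.connect("fridge-7")
g.inherit({"temp": "float"})
net.disconnect("fridge-7")
```

The point of the sketch is the lifecycle: nothing about the device persists in the network once it disconnects, which is what keeps the ghost footprint small.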

Communication with a ghost entity is protocol independent, and new protocols can be added. The ghost can also change behavior and data structure during active communication, meaning the data structure is not fixed and can be altered by the service provider. For more information on how to communicate with the Hive collective, visit

Ghosts vs Digital Twins

All network resources, such as clients, servers, gateways, IoT objects, mobile applications, or anything else that wants to communicate with the Hive, always connect to their unique ghost. Ghosts differ from the concept of digital twins in some areas.

Ghosts can be used to create digital twin services, but they also take the digital twin concept a step further. Ghosts operate in multiple layers of the communication stack, so the top layer (the data layer) may work much like a traditional digital twin network. But a ghost can also gain access to the lower stack layers and receive events up and down these layers at runtime. The other layers provide contextual data to players in other areas of the network ecosystem.

If you look at a communication network ecosystem, you find several “passive” players: hosting providers who maintain physical hardware such as servers, network routers and domain name services; security providers who handle monitoring, blockchains and encryption algorithms in the network; and application and service providers who create web services and applications addressable via domain names (URLs). Synx technology is designed to be an open, decentralized machine p2p network operating system.

To secure data between two unique endpoints, other players in the network ecosystem cannot access the data layer; instead, the events Synx sends up and down the stack layers provide a method by which the communication can be lawfully intercepted. For example, two clients are exchanging messages through a service that belongs to a morphic provider. The morphic provider can at any time send a Synx command to kill the active connection on one or both client endpoints. So even in a p2p network (with no server or middleware logic), a client can be disconnected from using a specific service.
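The control described above, where a provider can sever one client's use of one specific service even though no server sits in the data path, can be sketched like this. The names and the command string are assumptions for illustration, not Synx commands.

```python
class Endpoint:
    """Illustrative p2p client endpoint holding its active service links."""
    def __init__(self, client_id: str):
        self.client_id = client_id
        self.active_services: set[str] = set()

    def handle_command(self, command: str, service: str) -> None:
        # A morphic provider's control message arrives out of band,
        # outside the data layer the peers use to talk to each other.
        if command == "kill":
            # Disconnect this client from the named service only;
            # other services on the endpoint keep running.
            self.active_services.discard(service)

alice = Endpoint("alice")
alice.active_services = {"chat.v1", "lights.v1"}
alice.handle_command("kill", "chat.v1")
```

The design point is the separation: the provider never sees the message contents, but it still owns a control channel that can revoke access to its own service.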

Synx also supports moving the ownership of ghosts and network resources, handles read access, and moves ghosts between services dynamically at runtime. A ghost follows its user’s signature and can change the behavior of the “digital twin” dynamically. In this way a ghost can gain full ownership and network accessibility. The network becomes more secure and robust to change: ownership of resources can be moved around the open, heterogeneous network, and ghosts can gain (allocate) CPU and memory resources from providers with lower-layer access levels.


Practical tests show that a collective network can reduce development and maintenance costs by a factor of 50 compared with traditional development methods that use message queues and centralized hubs for IoT distribution.

The Hive collective is based on sharing data only when it changes in the network, and only with recipients that are active in the collective. This reduces network traffic by more than 50% and is a better option when you want to develop a greener smart city solution.
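The share-on-change idea above is what keeps traffic low, and it can be shown with a tiny publish/subscribe sketch. The broker below is an illustrative stand-in written for this example, not a Synx component.

```python
class ChangeBroker:
    """Forwards a value to subscribers only when it differs from the last one."""
    def __init__(self):
        self._last: dict[str, object] = {}
        self._subscribers: dict[str, list] = {}

    def subscribe(self, channel: str, callback) -> None:
        self._subscribers.setdefault(channel, []).append(callback)

    def publish(self, channel: str, value) -> int:
        # Only forward when the value actually changed; returns 1 if sent.
        if channel in self._last and self._last[channel] == value:
            return 0
        self._last[channel] = value
        for cb in self._subscribers.get(channel, []):
            cb(value)
        return 1

broker = ChangeBroker()
received = []
broker.subscribe("city/air/pm25", received.append)

# Six sensor readings, but only the three changes cross the network.
sent = sum(broker.publish("city/air/pm25", v) for v in [12, 12, 13, 13, 13, 14])
print(sent, received)
```

With steady sensor readings, most publishes are suppressed entirely, which is where the traffic reduction comes from.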

Using the Hive collective on Internet of Things devices such as TVs, smartphones, intelligent coffee machines and autonomous vehicles, the manufacturer does not need to install any prearranged program on the device before shipment. Instead, the behavior can be created as a service, and the devices can be integrated into third-party service providers ad hoc with a single click. Maintenance costs along the value chain are drastically reduced and simplified. Any upgrade to the device can be done remotely from an online service.

Using Hive tools enables GDPR (General Data Protection Regulation) compliance in communication. This means that consumers control their own data in the network. GDPR is one of the main challenges any future ICT project faces, since user data needs to be controlled and owned by individuals. Many IT service providers do not take this into account when designing a solution.

By design, the collective network does not persist data, so there is nothing to steal if someone manages to hack the system. The collective uses a blockchain-like form of storage distribution. Any hacking attempt will be difficult, since the data is spread across the network and makes no sense to the attacker. On top of this, we use data encryption and dynamic tokens on the communication. Building security solutions is an ongoing effort, and staying on top of the challenges the network faces will always be a process.

Using Synx tools will make consumers’ lives easier. Say you plan to move to another apartment in the near future and have a lot of smart devices you need to leave behind: an intelligent washing machine, TV, fridge, oven, windows, doorbell, lighting system and so on. With Hive tools, ownership of these devices can be transferred to a new person, who can then control data from these devices and choose which services to add them to. This is not easily done with traditional methods, where the user ends up reprogramming and reinstalling each device manually.

Data providers need to program interfaces (APIs) when sharing data with each other. With the Hive collective, this interface is replaced with links and commands against the network. The effect is that developers can integrate with third-party services much faster (minutes instead of days).

With the Hive collective, morphic providers can integrate with past, present and unknown future services (services that do not exist yet), without changing anything locally. A practical example: a service provider creates a light switch so a user can turn the lights in his living room on and off from a mobile app. One day in the future, the user purchases a new intelligent LED light produced by a new company. The mobile application will work with this new LED light system without any new installation or upgrade to the app. The collective makes it possible to connect past, present and future services together.