What types of tasks can Eli learn?
Eli is designed to learn and outperform 99.999% of humans in specific dexterity tasks.
No. Eli is designed as a zero-configuration and self-calibrating system.
Eli can handle a 10 kg load on the entire effector.
Eli will weigh about 50 kg / 110 lbs.
The short answer is yes. The extended answer requires some context.
Currently, we are not designing Eli to compete against humans in the specific impulse of motion. That will require additional innovation in chipset design, algorithms and mechatronics. We have already started working on a front-end draft for a 7 nm FinFET AI-specific ASIC. However, the first generation of Eli will rely entirely on multiple FPGA chipsets, not on ASICs. For now, our main innovation lies in the speed of the software decision tree running on FPGAs. On ASICs, the speed of the decision tree and the power efficiency will increase, while the chipset cost will decrease above a certain volume. Building ASICs makes sense above 10,000 units/year.
The engineering of the entire system makes Eli the fastest generic dexterity robot, designed to outperform humans in the average context, not in the specific impulse of motion. Depending on the task, this means that in a 15-minute timeframe a human might be faster than Eli, but over a 24-hour timeframe Eli will most likely be about 3x - 10x faster than a human, as some tasks require less computing power in the decision tree, while others require more. We will increase speed in the specific impulse with our future ASIC designs.
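As a back-of-the-envelope illustration of that 3x - 10x range: the sketch below assumes an 8-hour human shift and two made-up instantaneous speed factors; only the 24-hour continuous operation and the resulting ratio come from the answer above.

```rust
// Illustrative arithmetic behind the 3x-10x daily throughput claim.
// The 8-hour shift and the instantaneous speed factors are assumptions
// chosen only to show how continuous operation compounds.

fn main() {
    let human_shift_h = 8.0_f64; // assumed human working hours per day
    let eli_hours_h = 24.0_f64;  // Eli runs around the clock

    // Assumed instantaneous speed of Eli relative to a human for two task types.
    for (task, relative_speed) in [("compute-heavy task", 1.0), ("compute-light task", 3.3)] {
        let daily_ratio = (eli_hours_h * relative_speed) / human_shift_h;
        println!("{task}: ~{:.1}x more output per 24 h", daily_ratio);
    }
}
```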
Eli can be used both indoors and outdoors. At present, an IP67 water-resistance rating is enough.
In the default configuration, it needs a rail-based infrastructure, which makes sense in the context of highly autonomous facilities and sites. We might develop a mobility platform, but at this stage our main focus remains the rail-based infrastructure. Beating humans in mobility, power efficiency and silence is a very difficult task that requires major innovation in mechatronics and compliant mechanisms.
The rail infrastructure can be easily implemented in factories, agriculture, construction sites, HORECA use cases, entertainment, etc.
Eli uses custom-designed three-phase AC synchronous servomotors. However, as you might expect, there is an inverter that allows Eli to also be powered from a single-phase grid or an EV battery pack. When mounted on the rail, Eli is powered through the rails over an IP68 water-resistant power-transmission line.
Not all of them, because Eli is not designed to solve full-featured mobility. It is designed to solve general dexterity. If the blue-collar mobility requirement can be solved through the rail system, then yes: Eli can successfully replace and outperform blue-collar workers.
Eli's computing hardware is built around RISC-V / DDR5, interconnected over PCIe with DSPs, FPGAs and NVMe persistent storage. The embedded subsystems will run on the STM32 family.
Eli runs OS 4, a real-time operating system whose user space runs on top of a heterogeneous kernel. This means that the kernel runs multiple instruction sets (RISC-V, DSPs and FPGAs). The software is updated Over The Air (OTA) through WiFi or LTE/5G. Because OS 4 operates as an event-driven RTOS (Real-Time Operating System), all embedded subsystems can run lightweight versions of OS 4.
If you have previous experience with robotic arms, you might be apprehensive: classic robotic arms need a cage with safety rules, and then they need effectors, programming, calibration, fine-tuning, etc.
Eli is much easier to integrate, and it is not your job to operate it. The single most "difficult" step is mounting the rail system. Once that is done, just mount Eli on the rail and plug in the power. Eli is self-calibrating and 360° obstacle-aware. Now you only need to define the requirements (macrotasks) in natural language, as chat messages in a smartphone app (OS 4 is virtualized in a smartphone app) or as classic emails. You can even define the task in a voicemail from a normal landline phone call. If there are questions, you can address them over chat, email or voicemail. You will be notified when Eli has learned how to solve your described macrotask, with a demo video attached to the chat window or email. Provided you don't have additional requirements and are completely satisfied, you can confirm the macrotask inception.
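For illustration only, here is a minimal sketch of how that request-and-confirmation flow might be modelled. Every type, field and URL below is an assumption, not OS 4's actual API.

```rust
// Hypothetical data model for the macrotask workflow described above.
// None of these names are part of OS 4's real API; they only illustrate the flow.

enum Channel {
    ChatApp,   // the smartphone app (OS 4 virtualized in an app)
    Email,
    Voicemail, // transcribed landline message
}

struct MacrotaskRequest {
    description: String, // the requirement, expressed in natural language
    channel: Channel,    // where the request came from
}

enum MacrotaskStatus {
    Learning,                             // Eli is still working out how to solve it
    DemoReady { demo_video_url: String }, // demo video attached to chat/email
    Confirmed,                            // user accepted the demo: macrotask inception
}

// The user confirms only after being satisfied with the demo video.
fn confirm(status: &mut MacrotaskStatus) {
    if matches!(status, MacrotaskStatus::DemoReady { .. }) {
        *status = MacrotaskStatus::Confirmed;
    }
}

fn main() {
    let request = MacrotaskRequest {
        description: "Sort incoming parcels by destination shelf".to_string(),
        channel: Channel::ChatApp,
    };
    let mut status = MacrotaskStatus::DemoReady {
        demo_video_url: "https://example.invalid/demo.mp4".to_string(),
    };
    confirm(&mut status);
    println!(
        "request '{}' confirmed: {}",
        request.description,
        matches!(status, MacrotaskStatus::Confirmed)
    );
}
```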
At its core, OS 4 is a heterogeneous, asynchronous event-driven, real-time operating system (RTOS) built in Rust. Let's take it one by one:
1. Heterogeneous means that the operating system runs on multiple types of processors at the same time. The OS 4 kernel will run on RISC-V, DSP and FPGA/ASIC instruction sets simultaneously. We need this for a massive increase in performance at lower power consumption.
2. RTOS - The task scheduler of an RTOS is different from the scheduler of a general-purpose operating system (GPOS). In a hard RTOS, a task must be completed within a certain amount of time or the system fails completely; in a soft RTOS, a missed deadline degrades quality of service instead. There are important advantages and reasons why we build OS 4 as a soft RTOS.
The main disadvantage of an RTOS is the limited number of tasks that can run at the same time. However, not only is this limited number of tasks NOT a disadvantage for our model, it is exactly what we want to achieve: run a minimal number of tasks and obtain a purely deterministic runtime engine. We don't need to run hundreds of inefficient processes delivered as applications. We need to run a single application that will never consume more resources than the system has to offer. In our case this application is a game. Think about the old, rock-solid GameBoy handheld consoles: their hardware always loaded and ran a single game at a time. The difference is that our game is not a simple GameBoy game; it opens up a gate to an entire world of rich interactions. There is no way to install applications on OS 4 even if you wanted to. In the unlikely event you have to reboot, it only takes seconds. We all deserve a rock-solid UX, because no one should be forced to install and manage applications in order to communicate, solve problems or consume content. A single rock-solid game engine running on top of an RTOS is the only application you need. This single application handles communication, renders content and bootstraps an AI stack that handles any kind of logic we might need to solve problems. Anyone can build a game or any other service inside our game, simply by uploading content and expressing logic in natural language, as opposed to a computer language.
3. Asynchronous event-driven - There are two kinds of scheduling in an RTOS: event-driven and timetable scheduling. Reacting to events directly, rather than imposing some arbitrary schedule, is a huge advantage. There are millions of reasons things can go wrong, and some events arrive later than you might expect. The OS can continue to do whatever it needs to do until a certain event arrives. An event-driven deterministic runtime offers an eventually consistent solution while maintaining a deterministic runtime.
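Taken together, a minimal sketch of these three properties could look like the loop below. This is illustrative only and assumes nothing about OS 4's real scheduler: the backend tags, event types and deadline values are all invented.

```rust
use std::sync::mpsc;
use std::time::{Duration, Instant};

// Illustrative only: an event-driven loop with soft deadlines, where each
// event is tagged with the kind of compute backend it would ideally run on.
// OS 4's actual scheduler is not public; everything here is an assumption.

#[derive(Debug)]
enum Backend {
    RiscvCore, // general-purpose control logic
    Dsp,       // signal processing
    Fpga,      // fixed-latency decision-tree evaluation
}

enum Event {
    Work { backend: Backend, queued_at: Instant },
    Shutdown,
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let soft_deadline = Duration::from_millis(5);

    // A producer thread standing in for sensor and subsystem events.
    std::thread::spawn(move || {
        for backend in [Backend::RiscvCore, Backend::Dsp, Backend::Fpga] {
            tx.send(Event::Work { backend, queued_at: Instant::now() }).unwrap();
            std::thread::sleep(Duration::from_millis(2));
        }
        tx.send(Event::Shutdown).unwrap();
    });

    // The loop blocks until an event arrives: it reacts to events instead of
    // polling a fixed timetable. Missing a soft deadline degrades quality of
    // service rather than failing the whole system.
    for event in rx {
        match event {
            Event::Work { backend, queued_at } => {
                let latency = queued_at.elapsed();
                if latency > soft_deadline {
                    eprintln!("{:?}: soft deadline missed by {:?}", backend, latency - soft_deadline);
                } else {
                    println!("{:?}: handled within {:?}", backend, latency);
                }
            }
            Event::Shutdown => break,
        }
    }
}
```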
No. It is platform-independent. Our main goal is to make it available everywhere and in any kind of context. That is why OS 4 is designed as an agnostic operating system. In the end, OS 4 is a game that can run everywhere: on bare metal, on embedded systems, and on top of Windows, macOS, Linux, iOS and Android.
The reason we can afford to build Eli for all these platforms is that we have a single Rust codebase that compiles and runs on all of them. The higher-level code sits on a bottom layer that abstracts both the bare metal and the various platform-specific functions, at a very low cost.
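A minimal sketch of what such a bottom layer can look like in Rust, assuming a trait-based abstraction (the trait, types and build-time selection below are our assumptions, not OS 4's real code):

```rust
// Illustrative only: a single Rust codebase exposing one interface over
// different platforms. A bare-metal backend would implement the same trait
// over MMIO/UART and be selected at build time with #[cfg(...)] attributes.

use std::time::Instant;

trait Platform {
    fn log(&self, msg: &str);
    fn millis_since_boot(&self) -> u64;
}

// Host backend for desktop/mobile operating systems.
struct HostPlatform {
    started: Instant,
}

impl Platform for HostPlatform {
    fn log(&self, msg: &str) {
        println!("{msg}");
    }
    fn millis_since_boot(&self) -> u64 {
        self.started.elapsed().as_millis() as u64
    }
}

// Higher-level code is written once against `dyn Platform` and never needs
// to know which backend it is running on.
fn boot_banner(p: &dyn Platform) {
    p.log(&format!("booted, uptime {} ms", p.millis_since_boot()));
}

fn main() {
    let host = HostPlatform { started: Instant::now() };
    boot_banner(&host);
}
```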
A short answer would be: yes, it is all of them. But if you want to understand it better, you need to start thinking about it as a game. Today's games need a console or a platform to run on. The main difference is that OS 4 can run directly on our hardware, which is built to offer a lot of computing power for neural networks and other AI-specific applications. Think about OS 4 as a game that can run almost everywhere (bare metal, embedded systems, Windows, macOS, Linux, iOS and Android). It may even be possible to make it run on consoles like PlayStation or Xbox at some point in the future.
A game can bring an entire universe to life. In a 3D game, we can build:
We let developers continue to build the headless Web as server-side applications and empower content creators to build the virtual space inside the game, with audiovisual content and ideas expressed in natural language. This is a unique and highly innovative way to merge reality into a virtual spacetime. To a certain degree, we have already merged reality into the virtual (services, social networks, online shops, messaging, etc.), but with an inconsistent, primitive user experience. There are many more opportunities, like bringing autonomous robots into the virtual spacetime in order to train them and then letting them build things in real spacetime, or bringing intellectual power into the virtual to solve real problems.
In the end, OS 4 is also an operating system because it operates an entire system on top of bare metal.
In OS 4 you'll find an AI/HI Marketplace where you can subscribe to AI services or contribute to the training system. Subscribers pay on-demand or monthly fees for artificial intelligence services; contributors train AI models and earn money as lifetime royalties. Every time a subscriber pays for AI services, contributors get their royalty shares: 90% of subscriber revenue goes to contributors. These services are structured in the form of nano, micro and macro tasks. Nanotasks can be recombined to create new microtasks, and microtasks can be recombined to create new macrotasks. This will create an entire economy of intellectual property. We underestimate how a person's seemingly insignificant gestures can be transformed into intellectual property. The intellectual property is cryptographically traced in a hybrid distributed-decentralised public database (you can call it a blockchain if you want, because it is based on directed acyclic graphs (DAGs), but it operates a bit differently from a classic blockchain). This helps contributors cryptographically track the exact share they are owed. The shares can be converted into fiat currency or streamed directly into the contributors' Lightning Network wallet. Converting into fiat decreases your revenue share from 90% to 85%, because we have to cover additional fees and the complexity of KYC/AML policy.
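As a worked example of that split: the 90% and 85% figures come from this answer, while the subscription amount and the two contributor shares below are invented purely for illustration.

```rust
// Worked example of the revenue split described above. The 90% / 85% pools
// come from the answer; the 100-unit subscription fee and the contributor
// shares are made up for illustration.

fn main() {
    let subscription_fee = 100.0_f64; // arbitrary example amount

    // 90% of subscriber revenue goes to contributors when streamed over the
    // Lightning Network; converting to fiat drops the pool to 85%.
    let lightning_pool = subscription_fee * 0.90; // 90.0
    let fiat_pool = subscription_fee * 0.85;      // 85.0

    // Two hypothetical contributors who trained 70% and 30% of the task chain.
    let shares = [("contributor A", 0.70), ("contributor B", 0.30)];

    for (name, share) in shares {
        println!(
            "{name}: {:.2} over Lightning, {:.2} after fiat conversion",
            lightning_pool * share,
            fiat_pool * share
        );
    }
}
```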
Yes, we do, and it operates pretty much like the AI/HI marketplace. What is specific to our content marketplace is the way we deliver content, especially video content. We are talking about a DD-CDN (distributed-decentralised content delivery network) with a custom WebRTC implementation for signalling and symmetric NAT traversal, which helps us eliminate a lot of redundant traffic over the network and improve the overall experience.
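To make the signalling idea concrete, here is a purely hypothetical set of message types such a DD-CDN signalling layer might exchange; none of these names come from Deev's implementation, and the real protocol may look entirely different.

```rust
// Hypothetical signalling messages for a distributed-decentralised CDN.
// Illustrative only; not Deev's actual protocol.

enum SignalMessage {
    // A peer announces which content chunks it can serve.
    Announce { peer_id: String, chunk_ids: Vec<String> },
    // Standard WebRTC session negotiation payloads, relayed as opaque text.
    SdpOffer { from: String, to: String, sdp: String },
    SdpAnswer { from: String, to: String, sdp: String },
    // ICE candidates; symmetric NATs typically force a relayed fallback.
    IceCandidate { from: String, to: String, candidate: String },
}

fn describe(msg: &SignalMessage) -> &'static str {
    match msg {
        SignalMessage::Announce { .. } => "peer announcing available chunks",
        SignalMessage::SdpOffer { .. } => "session offer",
        SignalMessage::SdpAnswer { .. } => "session answer",
        SignalMessage::IceCandidate { .. } => "connectivity candidate",
    }
}

fn main() {
    let msg = SignalMessage::Announce {
        peer_id: "peer-1".into(),
        chunk_ids: vec!["video-chunk-042".into()],
    };
    println!("{}", describe(&msg));
}
```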
There is no prescripted storyboard; the future is unfolded by the entire user base. User interaction bootstraps an entire economy, community and gamification layer that emerges from reality. Some say that life itself is a game. This is exactly the type of game you will find here, but as an immersive, virtual experience.
99.999% of it is open source. The only process running in OS 4 that we will not open source is the AI process. Actually, it would be impossible to do so, because we bootstrap the main AI process in our lab long before your OS boots up, and it constantly rewrites itself. It will even write new processes or destroy processes on the fly. This unique ability doesn't come from static code, as we don't run a deterministic binary process for the AI. The main AI process fetches its state from the network as soon as it initializes (initialization is the only deterministic component of the main AI process). It will also push new updates to the network. The performance of our AI doesn't emerge from a few lines of code or from trained ML models, but from the engineering and fine-tuning of the entire Deev system. We will open source everything else for transparency reasons. Since the AI processes are jailed, there is no reason to worry about the AI gaining unauthorized access to a private resource. Users can fully control the AI's access level.
No. Kepler only weighs 240 g. Drones below 250 g don't require a license.
In addition to official no-fly zones, we will define other no-fly zones. You should expect a conservative approach. Don't expect to fly in high-density population areas, private areas, above crowds or above unknown people. In some high-density areas, we might let it fly a maximum of 2.5 m above the ground if all safety conditions are met. This is enough for some personal footage. We'll constantly work to increase the diversity of fly zones while maintaining safety.
No. It is impossible to manually fly Kepler.
As long as the EULA and other end-user legal agreements are not broken, we are fully responsible for accidents. Safety is our main priority and we have put a lot of thought into this. Every single flight has an ID in our databases before takeoff. Any kind of abnormality, even a slight one caused by a small crosswind, will trigger an instant upload of the black box database in addition to local storage.
The hacking scenario is always on the table. There are many possible attack vectors, but no more than in modern avionics. The security of OS 4 is the most important topic for us, and it is extremely difficult to run malicious code on it. The entire topic of OS 4 security will always be transparently discussed in the open-source community.
We implemented a one-tap takeoff system. You can trigger takeoff by tapping a button or by a "takeoff" command expressed in the local chat window or a remote chat window on your smartphone. If the takeoff conditions, as agreed by all sensors, are met, the takeoff process is initiated. The conditions are extremely conservative. We want to make Kepler the safest autonomous smart drone. A free radius of at least 2 m on the sides and 3 m on top is the minimum condition for takeoff. The landing process can be triggered by hand gestures or by natural language in a remote chat from your smartphone.
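The clearance figures below come from this answer; everything else (the type names, the sensor readings) is an illustrative assumption, not Kepler's actual flight code.

```rust
// Illustrative takeoff gate using the clearances quoted above
// (at least 2 m free on the sides, 3 m free on top). The struct and
// function names are assumptions, not Kepler's real flight stack.

struct ClearanceReading {
    side_m: f32, // smallest measured free distance to the sides, in metres
    top_m: f32,  // measured free distance above the drone, in metres
}

const MIN_SIDE_CLEARANCE_M: f32 = 2.0;
const MIN_TOP_CLEARANCE_M: f32 = 3.0;

// Takeoff is allowed only if every sensor agrees the conservative
// conditions are met; a single failing reading blocks the takeoff.
fn takeoff_allowed(readings: &[ClearanceReading]) -> bool {
    !readings.is_empty()
        && readings.iter().all(|r| {
            r.side_m >= MIN_SIDE_CLEARANCE_M && r.top_m >= MIN_TOP_CLEARANCE_M
        })
}

fn main() {
    let readings = vec![
        ClearanceReading { side_m: 2.4, top_m: 3.6 },
        ClearanceReading { side_m: 2.1, top_m: 3.2 },
    ];
    println!("takeoff allowed: {}", takeoff_allowed(&readings));
}
```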
Nothing can stop you if you want to. It runs OS 4. Please read more about OS 4.
Deev © 2020
This privacy policy describes how your personal data is collected and used when you visit www.deev.blue (the “Site”).
What personal data do we collect?
When you visit the Site, we automatically collect and store data about the browser you use, your country and city of origin, as well as information about how you interact with the Site. We do not use cookies or web beacons, and we do not collect or store IP addresses. All IP addresses are anonymised: our local IP lookup system discards the IP address immediately after the lookup operation is performed.
How do we collect your data?
You directly provide us with all data we collect. We collect and process data when you:
How do we use your data?
We use the contact form data to reply privately to the provided email address or, if necessary, in public or private defence situations.
We use your subscription data for marketing communication.
Behavioural advertising
We use your data exclusively for marketing communications that you requested.
Do we share your data?
We highly value your privacy and we will never share your data with any other third parties, except when legally required to do so.
Your rights
If you are a European resident, you have the right to access your personal data and to ask that your information be corrected, updated, or deleted. If you would like to exercise this right, please contact us through the contact form.
Changes
We may update this privacy policy from time to time in order to reflect changes to our practices or for other operational, legal or regulatory reasons.