#Unikernel Systems
ECE 5984 Project 1: Unikernel Filesystem solved
1 Introduction In a unikernel, a single application is statically compiled with the necessary libraries and a thin operating system layer into a single binary that is run as a virtualized guest on top of a hypervisor. Because (1) one unikernel runs only a single application and (2) the hypervisor takes care of isolating the unikernel from the rest of the system, the unikernel…
OSv - the operating system designed for the cloud
OSv is the open-source, versatile, modular unikernel designed to run unmodified Linux applications securely on micro-VMs in the cloud. Built from the ground up for effortless deployment and management of micro-services and serverless apps, with superior performance. Source: OSv – the operating system designed for the cloud

Tezos Foundation Backs Blockchain Research by Inria and Tarides
The Tezos Foundation today announced two fresh partnerships to encourage the development of the technologies underlying the Tezos protocol. The Foundation will be backing Inria, the French national institute for research in computer science and applied mathematics, and the development company Tarides, also based in France.
The Foundation said, in a statement: “The Tezos Foundation is committed to funding relevant research…
#blockchain #Citrix #Coq #Cryptography #Docker #IMDEA #MirageOS #Network #OCaml #OCaml Labs #Paris #Smart Contracts #Software #storage #Tarides #Tezos #Thomas Gazagnaire #Unikernel Systems

This unikernel-based operating system is a faster, safer way to run Linux apps https://thegadgetflow.com/blog/unikernel-based-operating-system/
Vulcan: Volentix Decentralized Cloud

Vulcan is the Roman god of fire and the forge. The Roman concept of the god seems to associate him with both the destructive and the fertilizing powers of fire: in the former, fire burns and destroys; in the latter, fire has been tamed to the will of man.
Vulcan, as god of fire, is also the god of the forge. Wikipedia defines a forge as: ‘The forge is used by the smith to heat a piece of metal to a temperature where it becomes easier to shape by forging, or to the point where work hardening no longer occurs.’ Consider this our metaphor for Vulcan. Only our forge is the global compute infrastructure, the smith is the developer, and the metal being forged is the DApps they are developing. Vulcan does the rest.
Decentralizing the Cloud
Today we have three players in the cloud space. Three, for an industry that is still in its infancy. Three is not a lot of choices. Three feels centralized and constrained. Three is a problem. Three concentrates power on a network that was designed to be decentralized and open. Three is a threat, and we should take it very seriously.
On the other hand, Vulcan is designed to be a decentralized cloud platform that extends the cloud services marketplace to any and all who wish to monetize their computers. To be clear, you can deploy Vulcan onto your laptop or into a datacenter with thousands of computers.
In addition to opening up the cloud marketplace, Vulcan lowers the barriers to entry for DApp developers. Vulcan will manage deployment, scalability, replication, security, and remuneration so that developers can spend their time developing rather than on infrastructure, licensing, billing, versioning, and all the other wasteful odds and ends.
How The Sausage Is Made
I don’t want to get too deep into the technology, but I feel it’s important to provide a few more details, which some may wish to skip. The other reason is to ‘shout out’ the primary technology we are using.
Kubernetes is the new cloud. We chose it for a million technical reasons, but most importantly we chose it for these:
It’s open source
The community is massive, responsive, professional, and world class
Its design tenets are laser-focused. Basically, Kubernetes knows what Kubernetes is.
It's part of the Cloud Native Computing Foundation
Architecture features required for Vulcan are already native in the platform.
Security First
Vulcan’s primary requirement is the need to ensure security. Data center operators need to know that whatever is deployed into their data center is isolated and controlled. DApp developers need to know that their DApps cannot be changed and cannot be stolen. Consumers need to know that the DApps they are using protect their best interests and adhere to the Volentix principles.
With Kubernetes, we are able to manage security all the way through the network stack as well as isolate processes. Additionally, third parties may create security DApps that harden the system even further. Finally, container technology continues to improve rapidly, and Kubernetes (and as a result Vulcan) is already built to accommodate this. For example, VDex will not be using Docker as its container due to security issues; it will use a library operating system instead. All the technical doodads such as ingress controllers, unikernels, P2P encryption, service isolation, and a flurry of other options, found deep down in the recesses of Kubernetes and its ecosystem, will automatically be deployed and managed for you. This is pretty nerdy stuff indeed.
Vulcan UI
Vulcan’s UI will be a ‘pluggable’ service in Verto that provides tooling to enable Vulcan operators to monitor and manage their Vulcan compute instance. The tooling will support both novice- and expert-level operational best practices and services. In the most basic example, an operator will need to manage when to update Vulcan (or one of its DApps), as well as how much VTX they are being rewarded for participating on the network. In an advanced case, the operator will require management of Vulcan at a much lower level. At this level, operators are able to (but not limited to):
View/sort/filter system logs and metrics
Create alerts
Manage capacity
Whitelist/blacklist services
Note that the UI is a ‘pluggable’ service inside Verto. In this way, other services are able to create Verto UIs for their services as well. The service provider is able to use Verto’s advanced features, such as data management and user management, to create interfaces without the need for backend services to support them. As a result, additional barriers to entry are removed for DApp developers.
Here Comes VDex
VDex is the first DApp that will be developed for Vulcan. The reasoning is simple: we need our VDex operators to have a safe and secure environment that is easy to use and provides them with sufficient tooling to manage it.
As Vulcan is deployable on a single computer all the way up to thousands of computers, so too is VDex. Things like scalability, replication, failover, throttling, and upgrading are managed by Vulcan for you. In this way, operators are able to scale their facilities easily as the network grows.
Additionally, because Verto is ‘pluggable’, a rich UI for traders and researchers will be made available. This Verto service will provide access to the backend VDex application as well as leverage the data management in Verto.
What the Future Looks Like
The future of Vulcan is envisioned to be a complex ecosystem of DApps working in concert with each other on a decentralized cloud computing infrastructure: a global compute infrastructure in which we have decoupled ourselves from any one vendor, through decentralization, in a way that enables the democratization of the cloud.
A democratized cloud is one that supports the smallest computing needs all the way to the most advanced. A cloud in which reward is distributed fairly and more broadly. A cloud where providers can compete and users are rewarded with more and better options than they have today.
This democratized cloud, Vulcan, will also provide the ways in which contributors choose to be rewarded for their contributions. These contributors include:
Vulcan Operator: These operators will receive remuneration for the compute resources they provide. Once the system develops, these operators will be able to set prices and define services. For example, a desktop instance of Vulcan may be cheap, since the operator provides no guarantees beyond the Vulcan defaults. In another example, the operator manages multiple regions with multiple centers, each with thousands of computers. These centers have physical security and are compliant in each region they serve. For good reason, this operator will be able to charge more for their services than the desktop operator. Regardless, both have their place in the ecosystem and both can be rewarded for their contribution to the community.
Open Source Developer: Developers should be compensated for their contributions, easily, using tooling they are already familiar with. Vulcan, through its licensing-management services (not yet mentioned, but to be covered in another post), can attest that a developer’s code has been used somewhere, for something, on Vulcan. This means that not only will DApp developers, or service providers, be rewarded when others use their service; the underlying open-source framework developers could also be rewarded. In this way, Vulcan decentralizes the development of applications while attesting to remuneration and compliance for any party’s needs.
DApp Provider: Up until now we have talked about the DApp developer; however, what we really mean is the DApp provider. In many cases, the DApp being designed, tested, and built requires an entire team to manage it. In Vulcan, DApp providers set the terms and conditions of their application to whatever meets their business model. For example, a DApp provider may have a free model and a paid model. They may have 24-hour support with an SLA, or they may have none. In short, DApp providers set the reward they want and Vulcan manages everything in between.
Wrapping Up
Vulcan is tooling designed to provide a decentralized cloud in which DApps can reside.
Read More->https://volentix.io/en/vulcan-volentix-decentralized-cloud/
NanoVMs unikernel based operating system for Linux apps | #Tech #Gadgets
Want to secure your servers against cyberattacks once and for all? Then check out this new operating system from NanoVMs. It uses unikernels to run software faster and safer than Linux. And it can be deployed to public cloud providers, private data centers, and more. Keep reading this blog post to learn more about this exciting operating system. Keep your information secure and your programs…

OSv – Linux binary compatible unikernel for virtualized environments
https://github.com/cloudius-systems/osv Comments
2018-03-21 21 LINUX now
LINUX
Linux Academy Blog
New Version of LPI Linux Essentials Course
Linux Academy Weekly Roundup 110
Announcing Python 3 for System Administrators
Linux Academy Weekly Roundup 109
The Story of Python 2 and 3
Linux Insider
Google Opens Maps APIs and World Becomes Dev Playground
New Raspberry Pi Packs More Power
SpaceChain, Arch Aim to Archive Human Knowledge in Space
Deepin Desktop Props Up Pardus Linux
Kali Linux Security App Lands in Microsoft Store
Linux Journal
VIDEO: Cooking With Linux: Lots and Lots of Word Processors! The Tuesday Linux Journal Show
GStreamer Major Release, OpenBMC Project, Playerunknown's Battlegrounds Free Mobile Version and More
diff -u: Intel Design Flaw Fallout
Tails Security Update, Companies Team Up to Cure Open Source License Noncompliance, LG Expanding webOS and More
Weekend Reading: All Things Bash
Linux Magazine
Gnome 3.28 Released
Install Firefox in a Snap on Linux
OpenStack Queens Released
Kali Linux Comes to Windows
Ubuntu to Start Collecting Some Data with Ubuntu 18.04
Linux Today
Digital asset management for an open movie project
How 11 open source projects got their names
Google Patches All Intel Chromebooks Against Spectre Variant 2 with Chrome OS 65
Dry - An Interactive CLI Manager For Docker Containers
How to install Chevereto Image Hosting on Ubuntu 16.04
Linux.com
Keynote: A Conversation with Linux and Git Creator Linus Torvalds
Building Helm Charts From the Ground Up: An Introduction to Kubernetes
Linux Powered Autonomous Arctic Buoys
State of AGL: Plumbing and Services
uniprof: Transparent Unikernel Performance Profiling and Debugging
Reddit Linux
Is there Instagram schedule software for Linux?
This year's Opening Keynote Speaker at Akademy 2018 is Dan Bielefeld, a human rights and Free Software activist that uses technology to expose crimes committed by the North Korea's Kim regime.
How to create Local Repository in Ubuntu 14.04
The Importance of QA
Lineage OS vs Copperhead OS
Riba Linux
How to install Zorin OS 12.3
Zorin OS 12.3 overview | Your Computer. Better. Easier. Faster.
MX Linux 17.1 overview | simple configuration, high stability, solid performance
How to install Neptune 5.0
Neptune 5.0 overview | an elegant out of the box experience.
Slashdot Linux
Chinese Companies Are Buying Up Cash-Strapped US Colleges
SpaceX Indicates It Will Manufacture the BFR Rocket In Los Angeles
Did Stephen Hawking Owe a Nobel Physicist a Subscription To a Softcore Porn Magazine?
Telegram Loses Supreme Court Appeal In Russia, Must Hand Over Encryption Keys
Orbitz Says Legacy Travel Site Likely Hacked, Affecting 880,000 Credit Cards
Softpedia
Opera 51.0.2830.55 / 52.0.2871.27 Beta / 53.0.2900.0 Dev
Linux Kernel 3.2.101 LTS
Linux Kernel 4.14.28 LTS / 4.9.88 LTS / 4.4.122 LTS / 4.1.50 LTS / 3.18.100 EOL / 3.16.56 LTS
GStreamer 1.14.0
GParted 0.31.0
Tecmint
Suplemon – A Powerful Console Text Editor with Multi Cursor Support
Goto – Quickly Navigate to Aliased Directories with Auto-Completion Support
How to Randomly Display ASCII Art on Linux Terminal
10 ‘who’ Command Examples for Linux Newbies
Gogo – Create Shortcuts to Long and Complicated Paths in Linux
nixCraft
Raspberry PI 3 model B+ Released: Complete specs and pricing
Debian Linux 9.4 released and here is how to upgrade it
400K+ Exim MTA affected by overflow vulnerability on Linux/Unix
Book Review: SSH Mastery – OpenSSH, PuTTY, Tunnels & Keys
How to use Chomper Internet blocker for Linux to increase productivity
Download SuperSU: Apk (All Versions) and Zip Root Files
SuperSU is the app you use to manage root permissions on your phone. For Samsung mobile devices, once you install TWRP recovery and root them (for example with CF-Auto-Root), you can add SuperSU right away. There are two kinds of the app you will find: the SuperSU.zip file and the SuperSU.apk. The apk version is, as you may know, what you install directly on an Android device; apk files are what Android OS supports.
As I said previously, you can install it on a Samsung or any other device via TWRP. The .zip file is for installing SuperSU through recovery: when you get the .zip version of the program, you flash it from recovery rather than opening it directly. Anything you install directly on a running Android device, on the other hand, must be an .apk, or it won’t install.
The SuperSU app: meet the #1 root-management app for Samsung Galaxy devices and other phones. The app lets you manage root permissions. This means that if your device isn’t rooted, this app will most likely not be applicable to you, so ignore it.
On the flip side, if your device is rooted, you will want this superb app. We’ll show you where to get the apk and zip variants of this root-permissions app.
Things to note before downloading any version of SuperSU: first of all, the device you’re trying to install it on must be rooted. We have some tutorials on the best way to root an Android device, with or without a PC.
This app is available both as a flashable file and as an app for your smartphone; before downloading, you should know which is right for you.
Note your device model.
How to install this app on a Samsung device: download the SuperSU.zip file from below and copy it to your internal memory.
Switch off the device and enter recovery mode by pressing the Power + Bixby (Home) + Volume Down buttons simultaneously.
When you get a warning message, press the Volume Up button.
In Recovery Mode, tap Wipe >>> Advanced Wipe >>> select Cache.
Return to the Recovery Mode main menu (Home) and tap Install >> select SuperSU.zip.
After the installation, reboot your phone.
You will then see the SuperSU app icon.
Download links: although this app works on all Android devices, the device must be rooted for it to function.
Here are the .zip files of the app.
Note: these versions cannot be installed directly. First you download them, extract them, and then flash them to the device. Below are the download links to the zip versions of the app.
SuperSU root files:
v2.52
v2.56
v2.65
v2.66
v2.74
v2.78
v2.79
v2.82
v2.82 SR5 (Chainfire Mirror)
How to root a Samsung device with the Odin tool and install this SuperSU.zip: download and install the appropriate Samsung USB driver on the PC.
Get the latest edition of the Odin flashing program -- Odin Download.
Download the root kernel package (CF-Auto-Root) for your exact device model.
Extract the Odin files.
Copy the root kernel files to your PC.
Now switch off the device and boot it by pressing the Volume Down + Home + Power buttons at precisely the same moment.
Press the Volume Up button when you get the warning message, to continue. Do not worry.
Connect the device in question to the PC.
Launch the Odin program. You should get a message showing that your device is attached. If you do not see such a sign, repeat the steps above.
Then continue with the steps below once you have received the message that your device is attached: Odin shows "ADDED!" in its message window.
Select the PDA or AP button, then browse to your kernel file and load it into the Odin program.
Press Start and wait for it to finish. The device may reboot to apply the SuperSU permissions; reboot again if needed.
After restarting the device, you will see the SuperSU app icon.
Download a root checker to verify that your device is rooted.
Installing the SuperSU.apk file: below is how to install this app on your smartphone. This process doesn’t require a computer; it should be something you already know how to do.
Download your chosen version of the SuperSU.apk.
After downloading, select the file and install it.
You may be prevented from installing this app if you have not enabled "Unknown sources" on your device. So, when you get the "Installation Blocked" message, tap Settings and enable "Unknown sources".
The app can be installed after enabling the unknown-sources option.
Launch it and grant it the access it requests.
What’s more? This has been an extensive overview of the various kinds and versions of the SuperSU app. Because this app is a must-have on a rooted Samsung device (most notably), we consider sharing a post like this to be of great help to a certain class of people out there.
Containers vs. Unikernels: An Apples-to-Oranges Comparison
I was asked recently to write a containers-versus-unikernels article, and I said, “Sure, but it won’t be the article you think it is because I share Per Buer’s sentiment that unikernels are not simply containers 2.0.” I see them as apples and oranges. I think a lot of the confusion stemmed from the acquisition of Unikernel Systems by Docker a few years ago; they were the team that…
Running Unikernels from Existing Linux Applications with OPS
Unikernels are an emerging deployment pattern that engineers are choosing over Linux and Docker because of their performance, security and size. Researchers from NEC are reporting boot times of 5ms, while other users talk about how small their VMs can get – in the kilobyte range if you’re using C. Still others, like OSv, have measured up to a 20% performance advantage in popular databases. However, unikernels have remained out of many developers’ reach because of their low-level nature.
That is, until we decided to open source a tool called ops.city (OPS). OPS is a new, free, open source tool that allows anyone, including non-developers, to instantly build and run unikernels on Linux or Mac from their existing software. There is no complicated re-compilation. There is no LDFLAG twiddling or random patching of various libraries you’d never patch yourself. OPS’s goal is to democratize access to unikernels.
Ok – enough of that – let’s build some unikernels.
First thing you’ll want to do is download ops itself:
curl https://ops.city/get.sh -sSfL | sh
Let’s start with a short example:
Let’s create a working directory:
mkdir p
Now put this into test.php:
<?php
echo "test\n";
?>
From a fresh install you’ll see that there are several pre-made packages available:
Let’s go ahead and download the php package. The package contains everything that you’ll need to build and run php unikernels, but absolutely nothing more. The idea is not to strip things out that aren’t necessary – it’s more a matter of only putting things in to make it work. You’ll notice, if you dig into the tarball, that you’ll find an ELF file along with some libraries. This was built for Linux, but your application won’t actually be running on Linux. Linux is now 28 years old and predates both commercialized virtualization and what has become known as “the cloud” – namely Amazon Web Services and Google Cloud – both of which heavily use virtualization underneath.
Now if you run the example you’ll see that we boot up our php application and run the code, but if you are paying attention you’ll notice that this is not like Linux, where hundreds of programs start before yours runs. Again, this is more than just replacing the init manager and applying seccomp. We’ve tailored your application to become its own little operating system – how cool is that?
Let’s try another one – put this into test.js :
console.log("we are all crazy programmers!");
This time we’ll try out node.js:
It’s important to note that we’ve only shown some basic examples here. OPS is actually capable of loading and running arbitrary ELF binaries.
If you are using Docker or Kubernetes now, you’ll definitely want to pay attention and get involved early in the unikernel ecosystem. If you are a microservices aficionado or a serverless fan, you should also be paying attention, as a lot of people are predicting this to be the underlying infrastructure for these paradigm-changing technologies.
So what are you going to build? Go check out https://github.com/nanovms/ops – fork/star the repo and let us know!
The post Running Unikernels from Existing Linux Applications with OPS appeared first on The Crazy Programmer.
(Via: Hacker News)
The vchan protocol is used to stream data between virtual machines on a Xen host without needing any locks. It is largely undocumented. The TLA Toolbox is a set of tools for writing and checking specifications. In this post, I’ll describe my experiences using these tools to understand how the vchan protocol works.
( this post also appeared on Reddit )
Background
Qubes and the vchan protocol
I run QubesOS on my laptop. A QubesOS desktop environment is made up of multiple virtual machines. A privileged VM, called dom0, provides the desktop environment and coordinates the other VMs. dom0 doesn’t have network access, so you have to use other VMs for doing actual work. For example, I use one VM for email and another for development work (these are called “application VMs”). There is another VM (called sys-net) that connects to the physical network, and yet another VM (sys-firewall) that connects the application VMs to sys-net.
My QubesOS desktop. The windows with blue borders are from my Debian development VM, while the green one is from a Fedora VM, etc.
The default sys-firewall is based on Fedora Linux. A few years ago, I replaced sys-firewall with a MirageOS unikernel. MirageOS is written in OCaml, and has very little C code (unlike Linux). It boots much faster and uses much less RAM than the Fedora-based VM. But recently, a user reported that restarting mirage-firewall was taking a very long time. The problem seemed to be that it was taking several minutes to transfer the information about the network configuration to the firewall. This is sent over vchan. The user reported that stracing the QubesDB process in dom0 revealed that it was sleeping for 10 seconds between sending the records, suggesting that a wakeup event was missing.
The lead developer of QubesOS said:
I’d guess missing evtchn trigger after reading/writing data in vchan.
Perhaps ocaml-vchan, the OCaml implementation of vchan, wasn’t implementing the vchan specification correctly? I wanted to check, but there was a problem: there was no vchan specification.
The Xen wiki lists vchan under Xen Document Days/TODO. The initial Git commit on 2011-10-06 said:
libvchan: interdomain communications library
This library implements a bidirectional communication interface between applications in different domains, similar to unix sockets. Data can be sent using the byte-oriented libvchan_read/libvchan_write or the packet-oriented libvchan_recv/libvchan_send.
Channel setup is done using a client-server model; domain IDs and a port number must be negotiated prior to initialization. The server allocates memory for the shared pages and determines the sizes of the communication rings (which may span multiple pages, although the default places rings and control within a single page).
With properly sized rings, testing has shown that this interface provides speed comparable to pipes within a single Linux domain; it is significantly faster than network-based communication.
I looked in the xen-devel mailing list around this period in case the reviewers had asked about how it worked.
One reviewer suggested:
Please could you say a few words about the functionality this new library enables and perhaps the design etc? In particular a protocol spec would be useful for anyone who wanted to reimplement for another guest OS etc. […] I think it would be appropriate to add protocol.txt at the same time as checking in the library.
However, the submitter pointed out that this was unnecessary, saying:
The comments in the shared header file explain the layout of the shared memory regions; any other parts of the protocol are application-defined.
Now, ordinarily, I wouldn’t be much interested in spending my free time tracking down race conditions in 3rd-party libraries for the benefit of strangers on the Internet. However, I did want to have another play with TLA…
TLA+
TLA+ is a language for specifying algorithms. It can be used for many things, but it is particularly designed for stateful parallel algorithms.
I learned about TLA while working at Docker. Docker EE provides software for managing large clusters of machines. It includes various orchestrators (SwarmKit, Kubernetes and Swarm Classic) and a web UI. Ensuring that everything works properly is very important, and to this end a large collection of tests had been produced. Part of my job was to run these tests. You take a test from a list in a web UI and click whatever buttons it tells you to click, wait for some period of time, and then check that what you see matches what the test says you should see. There were a lot of these tests, and they all had to be repeated on every supported platform, and for every release, release candidate or preview release. There was a lot of waiting involved and not much thinking required, so to keep my mind occupied, I started reading the TLA documentation.
I read The TLA+ Hyperbook and Specifying Systems. Both are by Leslie Lamport (the creator of TLA), and are freely available online. They’re both very easy to read. The hyperbook introduces the tools right away so you can start playing, while Specifying Systems starts with more theory and discusses the tools later. I think it’s worth reading both.
Once Docker EE 2.0 was released, we engineers were allowed to spend a week on whatever fun (Docker-related) project we wanted. I used the time to read the SwarmKit design documents and make a TLA model of that. I felt that using TLA prompted useful discussions with the SwarmKit developers (which can be seen in the pull request comments).
A specification document can answer questions such as:
What does it do? (requirements / properties)
How does it do it? (the algorithm)
Does it work? (model checking)
Why does it work? (inductive invariant)
Does it really work? (proofs)
You don’t have to answer all of them to have a useful document, but I will try to answer each of them for vchan.
Is TLA useful?
In my (limited) experience with TLA, whenever I have reached the end of a specification (whether reading it or writing it), I always find myself thinking “Well, that was obvious. It hardly seems worth writing a spec for that!”. You might feel the same after reading this blog post.
To judge whether TLA is useful, I suggest you take a few minutes to look at the code. If you are good at reading C code then you might find, like the Xen reviewers, that it is quite obvious what it does, how it works, and why it is correct. Or, like me, you might find you’d prefer a little help. You might want to jot down some notes about it now, to see whether you learn anything new.
To give the big picture:
Two VMs decide to communicate over vchan. One will be the server and the other the client.
The server allocates three chunks of memory: one to hold data in transit from the client to the server, one for data going from server to client, and the third to track information about the state of the system. This includes counters saying how much data has been written and how much read, in each direction.
The server tells Xen to grant the client access to this memory.
The client asks Xen to map the memory into its address space. Now client and server can both access it at once. There are no locks in the protocol, so be careful!
Either end sends data by writing it into the appropriate buffer and updating the appropriate counter in the shared block. The buffers are ring buffers, so after getting to the end, you start again from the beginning.
The data-written (producer) counter and the data-read (consumer) counter together tell you how much data is in the buffer, and where it is. When the difference is zero, the reader must stop reading and wait for more data. When the difference is the size of the buffer, the writer must stop writing and wait for more space. (A short sketch of this arithmetic appears just after this list.)
When one end is waiting, the other can signal it using a Xen event channel. This essentially sets a pending flag to true at the other end, and wakes the VM if it is sleeping. If a VM tries to sleep while it has an event pending, it will immediately wake up again. Sending an event when one is already pending has no effect.
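To make the counter arithmetic concrete, here is a rough Python sketch (my own illustration, not code from libvchan; RING_SIZE is a stand-in for the real ring size):

RING_SIZE = 1024          # hypothetical ring size (a power of two)
MOD = 2 ** 32             # the shared counters are 32-bit and wrap

def data_available(prod, cons):
    # Bytes written but not yet read; the counters only ever grow,
    # wrapping modulo 2**32, so their difference is the data in flight.
    return (prod - cons) % MOD

def space_available(prod, cons):
    # Whatever part of the ring is not holding unread data is free.
    return RING_SIZE - data_available(prod, cons)

def ring_index(counter):
    # Where a given counter value lands inside the ring array.
    return counter % RING_SIZE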
The public/io/libxenvchan.h header file provides some information, including the shared structures and comments about them:
xen/include/public/io/libxenvchan.h
struct ring_shared {
    uint32_t cons, prod;
};

#define VCHAN_NOTIFY_WRITE 0x1
#define VCHAN_NOTIFY_READ  0x2

/**
 * vchan_interface: primary shared data structure
 */
struct vchan_interface {
    /**
     * Standard consumer/producer interface, one pair per buffer
     * left is client write, server read
     * right is client read, server write
     */
    struct ring_shared left, right;
    /**
     * size of the rings, which determines their location
     * 10   - at offset 1024 in ring's page
     * 11   - at offset 2048 in ring's page
     * 12+  - uses 2^(N-12) grants to describe the multi-page ring
     * These should remain constant once the page is shared.
     * Only one of the two orders can be 10 (or 11).
     */
    uint16_t left_order, right_order;
    /**
     * Shutdown detection:
     *  0: client (or server) has exited
     *  1: client (or server) is connected
     *  2: client has not yet connected
     */
    uint8_t cli_live, srv_live;
    /**
     * Notification bits:
     *  VCHAN_NOTIFY_WRITE: send notify when data is written
     *  VCHAN_NOTIFY_READ: send notify when data is read (consumed)
     * cli_notify is used for the client to inform the server of its action
     */
    uint8_t cli_notify, srv_notify;
    /**
     * Grant list: ordering is left, right. Must not extend into actual ring
     * or grow beyond the end of the initial shared page.
     * These should remain constant once the page is shared, to allow
     * for possible remapping by a client that restarts.
     */
    uint32_t grants[0];
};
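As an aside, the left_order/right_order encoding described in the comment can be decoded like this (a hypothetical Python helper that just restates the comment above; it is not part of the library):

def ring_layout(order):
    # order encodes the ring size as a power of two: 2**order bytes.
    size = 1 << order
    if order == 10:
        return size, "at offset 1024 in the shared page"
    if order == 11:
        return size, "at offset 2048 in the shared page"
    # 12+: the ring lives in 2**(order-12) separately granted pages.
    grants = 1 << (order - 12)
    return size, "in %d granted page(s) outside the shared page" % grants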
You might also like to look at the vchan source code. Note that the libxenvchan.h file in this directory includes and extends the above header file (with the same name).
For this blog post, we will ignore the Xen-specific business of sharing the memory and telling the client where it is, and assume that the client has mapped the memory and is ready to go.
Basic TLA concepts
We’ll take a first look at TLA concepts and notation using a simplified version of vchan. TLA comes with excellent documentation, so I won’t try to make this a full tutorial, but hopefully you will be able to follow the rest of this blog post after reading it. We will just consider a single direction of the channel (e.g. client-to-server) here.
Variables, states and behaviour
A variable in TLA is just what a programmer expects: something that changes over time. For example, I’ll use Buffer to represent the data currently being transmitted.
We can also add variables that are just useful for the specification. I use Sent to represent everything the sender-side application asked the vchan library to transmit, and Got for everything the receiving application has received:
VARIABLES Got, Buffer, Sent
A state in TLA represents a snapshot of the world at some point. It gives a value for each variable. For example, { Got: "H", Buffer: "i", Sent: "Hi", ... } is a state. The ... is just a reminder that a state also includes everything else in the world, not just the variables we care about.
Here are some more states:
State  Got  Buffer  Sent
s0
s1          H       H
s2     H            H
s3     H    i       Hi
s4     Hi           Hi
s5     iH           Hi
A behaviour is a sequence of states, representing some possible history of the world. For example, << s0, s1, s2, s3, s4 >> is a behaviour. So is << s0, s1, s5 >>, but not one we want. The basic idea in TLA is to specify precisely which behaviours we want and which we don’t want.
A state expression is an expression that can be evaluated in the context of some state. For example, this defines Integrity to be a state expression that is true whenever what we have got so far matches what we wanted to send:
(* Take(m, i) is just the first i elements of message m. *)
Take(m, i) == SubSeq(m, 1, i)

(* Everything except the first i elements of message m. *)
Drop(m, i) == SubSeq(m, i + 1, Len(m))

Integrity ==
  Take(Sent, Len(Got)) = Got
Integrity is true for all the states above except for s5. I added some helper operators Take and Drop here. Sequences in TLA+ can be confusing because they are indexed from 1 rather than from 0, so it is easy to make off-by-one errors. These operators just use lengths, which we can all agree on. In Python syntax, it would be written something like:
def Integrity(s):
    return s.Sent.startswith(s.Got)
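Since Take and Drop do all the index arithmetic, it may also help to see that in Python they would just be slices (again, only an illustration):

def Take(m, i):
    return m[:i]   # the first i elements of m

def Drop(m, i):
    return m[i:]   # everything except the first i elements

assert Take("Hi", 1) == "H" and Drop("Hi", 1) == "i"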
A temporal formula is an expression that is evaluated in the context of a complete behaviour. It can use the temporal operators, which include:
[] (that’s supposed to look like a square) : “always”
<> (that’s supposed to look like a diamond) : “eventually”
[] F is true if the expression F is true at every point in the behaviour. <> F is true if the expression F is true at any point in the behaviour.
Messages we send should eventually arrive. Here’s one way to express that:
Availability ==
  \A x \in Nat : [] (Len(Sent) = x => <> (Len(Got) >= x) )
TLA syntax is a bit odd. It’s rather like LaTeX (which is not surprising: Lamport is also the “La” in LaTeX). \A means “for all” (rendered as an upside-down A). So this says that for every number x, it is always true that if we have sent x bytes then eventually we will have received at least x bytes.
This pattern of [] (F => <>G) is common enough that it has a shorter notation of F ~> G, which is read as “F (always) leads to G”. So, Availability can also be written as:
Availability ==
  \A x \in Nat : Len(Sent) = x ~> Len(Got) >= x
We’re only checking the lengths in Availability, but combined with Integrity that’s enough to ensure that we eventually receive what we want. So ideally, we’d like to ensure that every possible behaviour of the vchan library will satisfy the temporal formula Properties:
Properties ==
  Availability /\ []Integrity
That /\ is “and” by the way, and \/ is “or”. I did eventually start to be able to tell one from the other, though I still think && and || would be easier. In case I forget to explain some syntax, A Summary of TLA lists most of it.
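To see how these temporal formulas are evaluated, here is a toy Python checker I wrote for illustration (an approximation: real behaviours are infinite, so “eventually” is only checked on a finite prefix):

def always(behaviour, f):
    # [] f : f holds in every state of the behaviour.
    return all(f(s) for s in behaviour)

def eventually(behaviour, f):
    # <> f : f holds in some state of the behaviour.
    return any(f(s) for s in behaviour)

def leads_to(behaviour, f, g):
    # f ~> g : whenever f holds, g holds at that point or later.
    return all(eventually(behaviour[i:], g)
               for i, s in enumerate(behaviour) if f(s))

def integrity(s):
    # Integrity for a state like {"Got": "H", "Buffer": "i", "Sent": "Hi"}.
    return s["Sent"].startswith(s["Got"])

# Example: << s0, s1, s2, s3, s4 >> from the table above satisfies []Integrity.
behaviour = [
    {"Got": "",   "Buffer": "",  "Sent": ""},
    {"Got": "",   "Buffer": "H", "Sent": "H"},
    {"Got": "H",  "Buffer": "",  "Sent": "H"},
    {"Got": "H",  "Buffer": "i", "Sent": "Hi"},
    {"Got": "Hi", "Buffer": "",  "Sent": "Hi"},
]
assert always(behaviour, integrity)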
Actions
It is hopefully easy to see that Properties defines properties we want. A user of vchan would be happy to see that these are things they can rely on. But they don’t provide much help to someone trying to implement vchan. For that, TLA provides another way to specify behaviours.
An action in TLA is an expression that is evaluated in the context of a pair of states, representing a single atomic step of the system. For example:
Read ==
  /\ Len(Buffer) > 0
  /\ Got' = Got \o Buffer
  /\ Buffer' = << >>
  /\ UNCHANGED Sent
The Read action is true of a step if that step transfers all the data from Buffer to Got. Unprimed variables (e.g. Buffer) refer to the current state and primed ones (e.g. Buffer') refer to the next state. There’s some more strange notation here too:
We’re using /\ to form a bulleted list here rather than as an infix operator. This is indentation-sensitive. TLA also supports \/ lists in the same way.
\o is sequence concatenation (+ in Python).
<< >> is the empty sequence ([ ] in Python).
UNCHANGED Sent means Sent' = Sent.
In Python, it might look like this:
def Read(current, next):
    return len(current.Buffer) > 0 \
       and next.Got == current.Got + current.Buffer \
       and next.Buffer == [] \
       and next.Sent == current.Sent
Actions correspond more closely to code than temporal formulas, because they only talk about how the next state is related to the current one.
This action only allows one thing: reading the whole buffer at once. In the C implementation of vchan the receiving application can provide a buffer of any size and the library will read at most enough bytes to fill the buffer. To model that, we will need a slightly more flexible version:
Read ==
  \E n \in 1..Len(Buffer) :
    /\ Got' = Got \o Take(Buffer, n)
    /\ Buffer' = Drop(Buffer, n)
    /\ UNCHANGED Sent
This says that a step is a Read step if there is any n (in the range 1 to the length of the buffer) such that we transferred n bytes from the buffer. \E means “there exists …”.
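In Python terms, the nondeterminism means the step relation holds for any choice of n (an illustrative, runnable sketch; the State helper is mine, not TLA’s):

from types import SimpleNamespace as State

def Read(current, nxt):
    # True if 'nxt' follows from 'current' by reading n bytes, for some
    # n between 1 and the buffer length (the \E in the TLA action).
    return any(
        nxt.Got == current.Got + current.Buffer[:n]
        and nxt.Buffer == current.Buffer[n:]
        and nxt.Sent == current.Sent
        for n in range(1, len(current.Buffer) + 1))

# Example: reading one of the two buffered bytes is a valid Read step.
s  = State(Got="",  Buffer="Hi", Sent="Hi")
s2 = State(Got="H", Buffer="i",  Sent="Hi")
assert Read(s, s2)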
A Write action can be defined in a similar way:
CONSTANT BufferSize

Byte == 0..255

Write ==
  \E m \in Seq(Byte) \ {<< >>} :
    /\ Buffer' = Buffer \o m
    /\ Len(Buffer') <= BufferSize
    /\ Sent' = Sent \o m
    /\ UNCHANGED Got
A CONSTANT defines a parameter (input) of the specification (it’s constant in the sense that it doesn’t change between states). A Write operation adds some message m to the buffer, and also adds a copy of it to Sent so we can talk about what the system is doing. Seq(Byte) is the set of all possible sequences of bytes, and \ {<< >>} just excludes the empty sequence.
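For symmetry with Read, a Python-style sketch of Write might look like this (again just my illustration; the messages argument stands in for the set of non-empty byte sequences):

from types import SimpleNamespace as State

BUFFER_SIZE = 2  # stands in for the CONSTANT BufferSize

def Write(current, nxt, messages):
    # True if 'nxt' follows from 'current' by appending some non-empty
    # message m to both Buffer and Sent, within the buffer's capacity.
    return any(
        nxt.Buffer == current.Buffer + m
        and len(nxt.Buffer) <= BUFFER_SIZE
        and nxt.Sent == current.Sent + m
        and nxt.Got == current.Got
        for m in messages if m)

s  = State(Got="", Buffer="",  Sent="")
s2 = State(Got="", Buffer="H", Sent="H")
assert Write(s, s2, ["H", "Hi"])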
A step of the combined system is either a Read step or a Write step:
Next == Read \/ Write
We also need to define what a valid starting state for the algorithm looks like:
Init ==
  /\ Sent = << >>
  /\ Buffer = << >>
  /\ Got = << >>
Finally, we can put all this together to get a temporal formula for the algorithm:
vars == << Got, Buffer, Sent >>

Spec == Init /\ [][Next]_vars
Some more notation here:
[Next]_vars (that’s Next in brackets with a subscript vars) means Next \/ UNCHANGED vars.
Using Init (a state expression) in a temporal formula means it must be true for the first state of the behaviour.
[][Action]_vars means that [Action]_vars must be true for each step.
TLA syntax requires the _vars subscript here. This is because other things can be going on in the world beside our algorithm, so it must always be possible to take a step without our algorithm doing anything.
Spec defines behaviours just like Properties does, but in a way that makes it more obvious how to implement the protocol.
Correctness of Spec
Now we have definitions of Spec and Properties, it makes sense to check that every behaviour of Spec satisfies Properties. In Python terms, we want to check that all behaviours b satisfy this:
def SpecOK(b):
    return Spec(b) == False or Properties(b)
i.e. either b isn’t a behaviour that could result from the actions of our algorithm or, if it is, it satisfies Properties. In TLA notation, we write this as:
SpecOK == Spec => Properties
It’s OK if a behaviour is allowed by Properties but not by Spec. For example, the behaviour which goes straight from Got="", Sent="" to Got="Hi", Sent="Hi" in one step meets our requirements, but it’s not a behaviour of Spec.
The real implementation may itself further restrict Spec. For example, consider the behaviour << s0, s1, s2 >>:
State  Got  Buffer  Sent
s0          Hi      Hi
s1     H    i       Hi
s2     Hi           Hi
The sender sends two bytes at once, but the reader reads them one at a time. This is a behaviour of the C implementation, because the reading application can ask the library to read into a 1-byte buffer. However, it is not a behaviour of the OCaml implementation, which gets to choose how much data to return to the application and will return both bytes together.
That’s fine. We just need to show that OCamlImpl => Spec and Spec => Properties and we can deduce that OCamlImpl => Properties. This is, of course, the key purpose of a specification: we only need to check that each implementation implements the specification, not that each implementation directly provides the desired properties.
It might seem strange that an implementation doesn’t have to allow all the specified behaviours. In fact, even the trivial specification Spec == FALSE is considered to be a correct implementation of Properties, because it has no bad behaviours (no behaviours at all). But that’s OK. Once the algorithm is running, it must have some behaviour, even if that behaviour is to do nothing. As the user of the library, you are responsible for checking that you can use it (e.g. by ensuring that the Init conditions are met). An algorithm without any behaviours corresponds to a library you could never use, not to one that goes wrong once it is running.
The model checker
Now comes the fun part: we can ask TLC (the TLA model checker) to check that Spec => Properties. You do this by asking the toolbox to create a new model (I called mine SpecOK) and setting Spec as the “behaviour spec”. It will prompt for a value for BufferSize. I used 2. There will be various things to fix up:
To check Write, TLC first tries to get every possible Seq(Byte), which is an infinite set. I defined MSG == Seq(Byte) and changed Write to use MSG. I then added an alternative definition for MSG in the model so that we only send messages of limited length. In fact, my replacement MSG ensures that Sent will always just be an incrementing sequence (<< 1, 2, 3, ... >>). That’s enough to check Properties, and much quicker than checking every possible message.
The system can keep sending forever. I added a state constraint to the model: Len(Sent) < 4. This tells TLC to stop considering any execution once this becomes false.
With that, the model runs successfully. This is a nice feature of TLA: instead of changing our specification to make it testable, we keep the specification correct and just override some aspects of it in the model. So, the specification says we can send any message, but the model only checks a few of them.
Now we can add Integrity as an invariant to check. That passes, but it’s good to double-check by changing the algorithm. I changed Read so that it doesn’t clear the buffer, using Buffer' = Drop(Buffer, 0) (with 0 instead of n). Then TLC reports a counter-example (“Invariant Integrity is violated”):
The sender writes << 1, 2 >> to Buffer.
The reader reads one byte, to give Got=1, Buffer=12, Sent=12.
The reader reads another byte, to give Got=11, Buffer=12, Sent=12.
Looks like it really was checking what we wanted. It’s good to be careful. If we’d accidentally added Integrity as a “property” to check rather than as an “invariant” then it would have interpreted it as a temporal formula and reported success just because it is true in the initial state.
One really nice feature of TLC is that (unlike a fuzz tester) it does a breadth-first search and therefore finds minimal counter-examples for invariants. The example above is therefore the quickest way to violate Integrity.
Checking Availability complains because of the use of Nat (we’re asking it to check for every possible length). I replaced the Nat with AvailabilityNat and overrode that to be 0..4 in the model. It then complains “Temporal properties were violated” and shows an example where the sender wrote some data and the reader never read it.
The problem is, [Next]_vars always allows us to do nothing. To fix this, we can specify a “weak fairness” constraint. WF_vars(action) says that we can’t just stop forever with action being always possible but never happening. I updated Spec to require the Read action to be fair:
Spec == Init /\ [][Next]_vars /\ WF_vars(Read)
Again, care is needed here. If we had specified WF_vars(Next) then we would be forcing the sender to keep sending forever, which users of vchan are not required to do. Worse, this would mean that every possible behaviour of the system would result in Sent growing forever. Every behaviour would therefore hit our Len(Sent) < 4 constraint and TLC wouldn’t consider it further. That means that TLC would never check any actual behaviour against Availability, and its reports of success would be meaningless! Changing Read to require n \in 2..Len(Buffer) is a quick way to see that TLC is actually checking Availability.
Here’s the complete spec so far: vchan1.pdf (source)
The real vchan
The simple Spec algorithm above has some limitations. One obvious simplification is that Buffer is just the sequence of bytes in transit, whereas in the real system it is a ring buffer, made up of an array of bytes along with the producer and consumer counters. We could replace it with three separate variables to make that explicit. However, ring buffers in Xen are well understood and I don’t feel that it would make the specification any clearer to include that.
A more serious problem is that Spec assumes that there is a way to perform the Read and Write operations atomically. Otherwise the real system would have behaviours not covered by the spec. To implement the above Spec correctly, you’d need some kind of lock. The real vchan protocol is more complicated than Spec, but avoids the need for a lock.
The real system has more shared state than just Buffer. I added extra variables to the spec for each item of shared state in the C code, along with its initial value:
SenderLive = TRUE (sender sets to FALSE to close connection)
ReceiverLive = TRUE (receiver sets to FALSE to close connection)
NotifyWrite = TRUE (receiver wants to be notified of next write)
DataReadyInt = FALSE (sender has signalled receiver over event channel)
NotifyRead = FALSE (sender wants to be notified of next read)
SpaceAvailableInt = FALSE (receiver has notified sender over event channel)
DataReadyInt represents the state of the receiver’s event port. The sender can make a Xen hypercall to set this and wake (or interrupt) the receiver. I guess sending these events is somewhat slow, because the NotifyWrite system is used to avoid sending events unnecessarily. Likewise, SpaceAvailableInt is the sender’s event port.
The algorithm
Here is my understanding of the protocol. On the sending side:
The sending application asks to send some bytes. We check whether the receiver has closed the channel and abort if so.
We check the amount of buffer space available.
If there isn’t enough, we set NotifyRead so the receiver will notify us when there is more. We also check the space again after this, in case it changed while setting the flag.
If there is any space:
We write as much data as we can to the buffer.
If the NotifyWrite flag is set, we clear it and notify the receiver of the write.
If we wrote everything, we return success.
Otherwise, we wait to be notified of more space.
We check whether the receiver has closed the channel. If so we abort. Otherwise, we go back to step 2.
On the receiving side:
The receiving application asks us to read up to some amount of data.
We check the amount of data available in the buffer.
If there isn’t as much as requested, we set NotifyWrite so the sender will notify us when there is. We also check the space again after this, in case it changed while setting the flag.
If there is any data, we read up to the amount requested. If the NotifyRead flag is set, we clear it and notify the sender of the new space. We return success to the application (even if we didn’t get as much as requested).
Otherwise (if there was no data), we check whether the sender has closed the connection.
If not (if the connection is still open), we wait to be notified of more data, and then go back to step 2.
Either side can close the connection by clearing their “live” flag and signalling the other side. I assumed there is also some process-local way that the close operation can notify its own side if it’s currently blocked.
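The subtle part on both sides is the “set the notify flag, then re-check” dance, which closes the race window without a lock. Here is a condensed Python-style sketch of the sender’s waiting logic (my paraphrase of the steps above, with a hypothetical wait_for_event helper; not the real code):

def sender_wait_for_space(state, needed, buffer_size):
    while True:
        free = buffer_size - len(state.Buffer)
        if free < needed:
            # Ask the receiver to notify us on its next read...
            state.NotifyRead = True
            # ...then check again: the receiver may have read (and
            # tested the flag) between our first check and setting it.
            free = buffer_size - len(state.Buffer)
        if free > 0:
            return free              # write what we can now
        wait_for_event(state)        # block until SpaceAvailableInt fires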
To make expressing this kind of step-by-step algorithm easier, TLA+ provides a programming-language-like syntax called PlusCal. It then translates PlusCal into TLA actions.
Confusingly, there are two different syntaxes for PlusCal: Pascal style and C style. This means that, when you search for examples on the web, there is a 50% chance they won’t work because they’re using the other flavour. I started with the Pascal one because that was the first example I found, but switched to C-style later because it was more compact.
Here is my attempt at describing the sender algorithm above in PlusCal:
fair process (SenderWrite = SenderWriteID)
variables free = 0,     \* Our idea of how much free space is available.
          msg = << >>,  \* The data we haven't sent yet.
          Sent = << >>; \* Everything we were asked to send.
{
sender_ready:-
  while (TRUE) {
    if (~SenderLive \/ ~ReceiverLive) goto Done
    else {
      with (m \in MSG) { msg := m };
      Sent := Sent \o msg;  \* Remember we wanted to send this
    };
sender_write:
    while (TRUE) {
      free := BufferSize - Len(Buffer);
sender_request_notify:
      if (free >= Len(msg)) goto sender_write_data
      else NotifyRead := TRUE;
sender_recheck_len:
      free := BufferSize - Len(Buffer);
sender_write_data:
      if (free > 0) {
        Buffer := Buffer \o Take(msg, Min(Len(msg), free));
        msg := Drop(msg, Min(Len(msg), free));
        free := 0;
sender_check_notify_data:
        if (NotifyWrite) {
          NotifyWrite := FALSE;  \* Atomic test-and-clear
sender_notify_data:
          DataReadyInt := TRUE;  \* Signal receiver
          if (msg = << >>) goto sender_ready
        } else if (msg = << >>) goto sender_ready
      };
sender_blocked:
      await SpaceAvailableInt \/ ~SenderLive;
      if (~SenderLive) goto Done;
      else SpaceAvailableInt := FALSE;
sender_check_recv_live:
      if (~ReceiverLive) goto Done;
    }
  }
}
The labels (e.g. sender_request_notify:) represent points in the program where other actions can happen. Everything between two labels is considered to be atomic. I checked that every block of code between labels accesses only one shared variable. This means that the real system can’t see any states that we don’t consider. The toolbox doesn’t provide any help with this; you just have to check manually.
The sender_ready label represents a state where the client application hasn’t yet decided to send any data. Its label is tagged with - to indicate that fairness doesn’t apply here, because the protocol doesn’t require applications to keep sending more data forever. The other steps are fair, because once we’ve decided to send something we should keep going.
Taking a step from sender_ready to sender_write corresponds to the vchan library’s write function being called with some argument m. The with (m \in MSG) says that m could be any message from the set MSG. TLA also contains a CHOOSE operator that looks like it might do the same thing, but it doesn’t. When you use with, you are saying that TLC should check all possible messages. When you use CHOOSE, you are saying that it doesn’t matter which message TLC tries (and it will always try the same one). Or, in terms of the specification, a CHOOSE would say that applications can only ever send one particular message, without telling you what that message is.
In sender_write_data, we set free := 0 for no obvious reason. This is just to reduce the number of states that the model checker needs to explore, since we don’t care about its value after this point.
Some of the code is a little awkward because I had to put things in else branches that would more naturally go after the whole if block, but the translator wouldn’t let me do that. The use of semi-colons is also a bit confusing: the PlusCal-to-TLA translator requires them after a closing brace in some places, but the PDF generator messes up the indentation if you include them.
Here’s how the code block starting at sender_request_notify gets translated into a TLA action:
sender_request_notify ==
  /\ pc[SenderWriteID] = "sender_request_notify"
  /\ IF free >= Len(msg)
       THEN /\ pc' = [pc EXCEPT ![SenderWriteID] = "sender_write_data"]
            /\ UNCHANGED NotifyRead
       ELSE /\ NotifyRead' = TRUE
            /\ pc' = [pc EXCEPT ![SenderWriteID] = "sender_recheck_len"]
  /\ UNCHANGED << SenderLive, ReceiverLive, Buffer, NotifyWrite,
                  DataReadyInt, SpaceAvailableInt, free, msg, Sent,
                  have, want, Got >>
pc is a mapping from process ID to the label where that process is currently executing. So sender_request_notify can only be performed when the SenderWriteID process is at the sender_request_notify label. Afterwards pc[SenderWriteID] will either be at sender_write_data or sender_recheck_len (if there wasn’t enough space for the whole message).
Here’s the code for the receiver:
fair process (ReceiverRead = ReceiverReadID)
variables have = 0,    \* The amount of data we think the buffer contains.
          want = 0,    \* The amount of data the user wants us to read.
          Got = << >>; \* Pseudo-variable recording all data ever received by receiver.
{
recv_ready:
  while (ReceiverLive) {
    with (n \in 1..MaxReadLen) want := n;
recv_reading:
    while (TRUE) {
      have := Len(Buffer);
recv_got_len:
      if (have >= want) goto recv_read_data
      else NotifyWrite := TRUE;
recv_recheck_len:
      have := Len(Buffer);
recv_read_data:
      if (have > 0) {
        Got := Got \o Take(Buffer, Min(want, have));
        Buffer := Drop(Buffer, Min(want, have));
        want := 0;
        have := 0;
recv_check_notify_read:
        if (NotifyRead) {
          NotifyRead := FALSE;  \* (atomic test-and-clear)
recv_notify_read:
          SpaceAvailableInt := TRUE;
          goto recv_ready;  \* Return success
        } else goto recv_ready;  \* Return success
      } else if (~SenderLive \/ ~ReceiverLive) {
        goto Done;
      };
recv_await_data:
      await DataReadyInt \/ ~ReceiverLive;
      if (~ReceiverLive) { want := 0; goto Done }
      else DataReadyInt := FALSE;
    }
  }
}
It’s quite similar to before. recv_ready corresponds to a state where the application hasn’t yet called read. When it does, we take n (the maximum number of bytes to read) as an argument and store it in the local variable want.
Note: you can use the C library in blocking or non-blocking mode. In blocking mode, a write (or read) waits until data is sent (or received). In non-blocking mode, it returns a special code to the application indicating that it needs to wait. The application then does the waiting itself and then calls the library again. I think the specification above covers both cases, depending on whether you think of sender_blocked and recv_await_data as representing code inside or outside of the library.
We also need a way to close the channel. It wasn’t clear to me, from looking at the C headers, when exactly you’re allowed to do that. I think that if you had a multi-threaded program and you called the close function while the write function was blocked, it would unblock and return. But if you happened to call it at the wrong time, it would try to use a closed file descriptor and fail (or read from the wrong one). So I guess it’s single threaded, and you should use the non-blocking mode if you want to cancel things.
That means that the sender can close only when it is at sender_ready or sender_blocked, and similarly for the receiver. The situation with the OCaml code is the same, because it is cooperatively threaded and so the close operation can only be called while blocked or idle. However, I decided to make the specification more general and allow for closing at any point by modelling closing as separate processes:
fair process (SenderClose = SenderCloseID) {
  sender_open:-         SenderLive := FALSE;     \* Clear liveness flag
  sender_notify_closed: DataReadyInt := TRUE;    \* Signal receiver
}

fair process (ReceiverClose = ReceiverCloseID) {
  recv_open:-           ReceiverLive := FALSE;      \* Clear liveness flag
  recv_notify_closed:   SpaceAvailableInt := TRUE;  \* Signal sender
}
Again, the processes are “fair” because once we start closing we should finish, but the initial labels are tagged with “-“ to disable fairness there: it’s OK if you keep a vchan open forever.
There’s a slight naming problem here. The PlusCal translator names the actions it generates after the starting state of the action. So sender_open is the action that moves from the sender_open label. That is, the sender_open action actually closes the connection!
Finally, we share the event channel with the buffer going in the other direction, so we might get notifications that are nothing to do with us. To ensure we handle that, I added another process that can send events at any time:
process (SpuriousInterrupts = SpuriousID) {
  spurious: while (TRUE) {
    either SpaceAvailableInt := TRUE
    or     DataReadyInt := TRUE
  }
}
either/or says that we need to consider both possibilities. This process isn’t marked fair, because we can’t rely on these interrupts coming. But we do have to handle them when they happen.
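For intuition, either/or simply becomes a disjunction in the generated TLA. Roughly (a sketch that omits the UNCHANGED clause for the remaining variables):

spurious == /\ pc[SpuriousID] = "spurious"
            /\ \/ SpaceAvailableInt' = TRUE /\ UNCHANGED DataReadyInt
               \/ DataReadyInt' = TRUE /\ UNCHANGED SpaceAvailableInt
            /\ pc' = [pc EXCEPT ![SpuriousID] = "spurious"]
            \* (plus UNCHANGED for all the other variables)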
Testing the full spec
PlusCal code is written in a specially-formatted comment block, and you have to press Ctrl-T to generate (or update) the TLA translation before running the model checker.
Be aware that the TLA Toolbox is a bit unreliable about keyboard short-cuts. While typing into the editor always works, short-cuts such as Ctrl-S (save) sometimes get disconnected. So you think you’re doing “edit/save/translate/save/check” cycles, but really you’re just checking some old version over and over again. You can avoid this by always running the model checker with the keyboard shortcut too, since that always seems to fail at the same time as the others. Focussing a different part of the GUI and then clicking back in the editor again fixes everything for a while.
Anyway, running our model on the new spec shows that Integrity is still OK. However, the Availability check fails with the following counter-example:
The sender writes << 1 >> to Buffer.
The sender closes the connection.
The receiver closes the connection.
All processes come to a stop, but the data never arrived.
We need to update Availability to consider the effects of closing connections. And at this point, I’m very unsure what vchan is intended to do. We could say:
Availability ==
  \A x \in AvailabilityNat : Len(Sent) = x ~>
       \/ Len(Got) >= x
       \/ ~ReceiverLive
       \/ ~SenderLive
That passes. But vchan describes itself as being like a Unix socket. If you write to a Unix socket and then close it, you still expect the data to be delivered. So actually I tried this:
Availability ==
  \A x \in AvailabilityNat :
    x = Len(Sent) /\ SenderLive /\ pc[SenderWriteID] = "sender_ready" ~>
       \/ Len(Got) >= x
       \/ ~ReceiverLive
This says that if a sender write operation completes successfully (we’re back at sender_ready) and at that point the sender hasn’t closed the connection, then the receiver will eventually receive the data (or close its end).
That is how I would expect it to behave. But TLC reports that the new spec does not satisfy this, giving this example (simplified - there are 16 steps in total):
The receiver starts reading. It finds that the buffer is empty.
The sender writes some data to Buffer and returns to sender_ready.
The sender closes the channel.
The receiver sees that the connection is closed and stops.
Is this a bug? Without a specification, it’s impossible to say. Maybe vchan was never intended to ensure delivery once the sender has closed its end. But this case only happens if you’re very unlucky about the scheduling. If the receiving application calls read when the sender has closed the connection but there is data available then the C code does return the data in that case. It’s only if the sender happens to close the connection just after the receiver has checked the buffer and just before it checks the close flag that this happens.
It’s also easy to fix. I changed the code in the receiver to do a final check on the buffer before giving up:
} else if (~SenderLive \/ ~ReceiverLive) {
recv_final_check:
  if (Len(Buffer) = 0) { want := 0; goto Done }
  else goto recv_reading;
}
With that change, we can be sure that data sent while the connection is open will always be delivered (provided only that the receiver doesn’t close the connection itself). If you spotted this issue yourself while you were reviewing the code earlier, then well done!
Note that when TLC finds a problem with a temporal property (such as Availability), it does not necessarily find the shortest example first. I changed the limit on Sent to Len(Sent) < 2 and added an action constraint of ~SpuriousInterrupts to get a simpler example, with only 1 byte being sent and no spurious interrupts.
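For reference, those limits are just definitions plugged into the model as a state constraint and an action constraint; roughly (a sketch, with names of my own choosing):

\* State and action constraints set in the TLC model (a sketch):
StateConstraint  == Len(Sent) < 2        \* at most 1 byte is ever sent
ActionConstraint == ~SpuriousInterrupts  \* forbid spurious-interrupt steps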
Some odd things
I noticed a couple of other odd things, which I thought I’d mention.
First, NotifyWrite is initialised to TRUE, which seemed unnecessary. We can initialise it to FALSE instead and everything still works. We can even initialise it with NotifyWrite \in {TRUE, FALSE} to allow either behaviour, and thus test that old programs that followed the original version of the spec still work with either behaviour.
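In PlusCal, that’s just a nondeterministic initial value in the variables declaration; a minimal sketch:

variables
  \* Allow either initial value, so TLC explores both behaviours:
  NotifyWrite \in {TRUE, FALSE};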
That’s a nice advantage of using a specification language. Saying “the code is the spec” becomes less useful as you build up more and more versions of the code!
However, because there was no spec before, we can’t be sure that existing programs do follow it. And, in fact, I found that QubesDB uses the vchan library in a different and unexpected way. Instead of calling read, and then waiting if libvchan says to, QubesDB blocks first in all cases, and then calls the read function once it gets an event.
We can document that by adding an extra step at the start of ReceiverRead:
recv_init: either goto recv_ready       \* (recommended)
           or {                         \* (QubesDB does this)
             with (n \in 1..MaxReadLen) want := n;
             goto recv_await_data;
           };
Then TLC shows that NotifyWrite cannot start as FALSE.
The second odd thing is that the receiver sets NotifyWrite whenever there isn’t enough data available to fill the application’s buffer completely. But usually when you do a read operation you just provide a buffer large enough for the largest likely message. It would probably make more sense to set NotifyWrite only when the buffer is completely empty. After checking the current version of the algorithm, I changed the specification to allow either behaviour.
Why does vchan work?
At this point, we have specified what vchan should do and how it does it. We have also checked that it does do this, at least for messages up to 3 bytes long with a buffer size of 2. That doesn’t sound like much, but we still checked 79,288 distinct states, with behaviours up to 38 steps long. This would be a perfectly reasonable place to declare the specification (and blog post) finished.
However, TLA has some other interesting abilities. In particular, it provides a very useful technique to help discover why the algorithm works.
We’ll start with Integrity. We would like to argue as follows:
Integrity is true in any initial state (i.e. Init => Integrity).
Any Next step preserves Integrity (i.e. Integrity /\ Next => Integrity').
Then it would just be a matter of looking at each possible action that makes up Next and checking that each one individually preserves Integrity. However, we can’t do this with Integrity because (2) isn’t true. For example, the state { Got: "", Buffer: "21", Sent: "12" } satisfies Integrity, but if we take a read step then the new state won’t. Instead, we have to argue “If we take a Next step in any reachable state then Integrity'”, but that’s very difficult because how do we know whether a state is reachable without searching them all?
So the idea is to make a stronger version of Integrity, called IntegrityI, which does what we want. IntegrityI is called an inductive invariant. The first step is fairly obvious - I began with:
IntegrityI ==
  Sent = Got \o Buffer \o msg
Integrity just said that Got is a prefix of Sent. This says specifically that the rest is Buffer \o msg - the data currently being transmitted and the data yet to be transmitted.
We can ask TLC to check Init /\ [][Next]_vars => []IntegrityI to check that it is an invariant, as before. It does that by finding all the Init states and then taking Next steps to find all reachable states. But we can also ask it to check IntegrityI /\ [][Next]_vars => []IntegrityI. That is, the same thing but starting from any state matching IntegrityI instead of Init.
I created a new model (IntegrityI) to do that. It reports a few technical problems at the start because it doesn’t know the types of anything. For example, it can’t choose initial values for SenderLive without knowing that SenderLive is a boolean. I added a TypeOK state expression that gives the expected type of every variable:
MESSAGE == Seq(Byte)
FINITE_MESSAGE(L) == UNION ( { [ 1..N -> Byte ] : N \in 0..L } )

TypeOK ==
  /\ Sent \in MESSAGE
  /\ Got \in MESSAGE
  /\ Buffer \in FINITE_MESSAGE(BufferSize)
  /\ SenderLive \in BOOLEAN
  /\ ReceiverLive \in BOOLEAN
  /\ NotifyWrite \in BOOLEAN
  /\ DataReadyInt \in BOOLEAN
  /\ NotifyRead \in BOOLEAN
  /\ SpaceAvailableInt \in BOOLEAN
  /\ free \in 0..BufferSize
  /\ msg \in FINITE_MESSAGE(MaxWriteLen)
  /\ want \in 0..MaxReadLen
  /\ have \in 0..BufferSize
We also need to tell it all the possible states of pc (which says which label each process is at):
PCOK == pc \in [
  SW: {"sender_ready", "sender_write", "sender_request_notify",
       "sender_recheck_len", "sender_write_data", "sender_blocked",
       "sender_check_notify_data", "sender_notify_data",
       "sender_check_recv_live", "Done"},
  SC: {"sender_open", "sender_notify_closed", "Done"},
  RR: {"recv_init", "recv_ready", "recv_reading", "recv_got_len",
       "recv_recheck_len", "recv_read_data", "recv_final_check",
       "recv_await_data", "recv_check_notify_read", "recv_notify_read",
       "Done"},
  RC: {"recv_open", "recv_notify_closed", "Done"},
  SP: {"spurious"}
]
You might imagine that the PlusCal translator would generate that for you, but it doesn’t. We also need to override MESSAGE with FINITE_MESSAGE(n) for some n (I used 2); otherwise, TLC can’t enumerate all possible messages.
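In the Toolbox, this is a definition override in the model rather than a change to the spec itself; the effect is roughly this sketch (with 2 as the bound):

\* Definition override applied by the TLC model, not the spec (a sketch):
MESSAGE == FINITE_MESSAGE(2)

Now we have: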
IntegrityI ==
  /\ TypeOK
  /\ PCOK
  /\ Sent = Got \o Buffer \o msg
With that out of the way, TLC starts finding real problems (that is, examples showing that IntegrityI /\ Next => IntegrityI' isn’t true). First, recv_read_data would do an out-of-bounds read if have = 1 and Buffer = << >>. Our job is to explain why that isn’t a valid state. We can fix it with an extra constraint:
IntegrityI ==
  /\ TypeOK
  /\ PCOK
  /\ Sent = Got \o Buffer \o msg
  /\ pc[ReceiverReadID] = "recv_read_data" => have <= Len(Buffer)
(note: that => is “implies”, while the <= is “less-than-or-equal-to”)
Now it complains that if we do recv_got_len with Buffer = << >>, have = 1, want = 0 then we end up in recv_read_data with Buffer = << >>, have = 1, and we have to explain why that can’t happen and so on.
Because TLC searches breadth-first, the examples it finds never have more than 2 states. You just have to explain why the first state can’t happen in the real system. Eventually, you get a big ugly pile of constraints, which you then think about for a bit and simplify. I ended up with:
IntegrityI ==
  /\ TypeOK
  /\ PCOK
  /\ Sent = Got \o Buffer \o msg
  /\ have <= Len(Buffer)
  /\ free <= BufferSize - Len(Buffer)
  /\ pc[SenderWriteID] \in {"sender_write", "sender_request_notify",
                            "sender_recheck_len", "sender_write_data",
                            "sender_blocked", "sender_check_recv_live"}
       => msg /= << >>
  /\ pc[SenderWriteID] \in {"sender_ready"} => msg = << >>
It’s a good idea to check the final IntegrityI with the original SpecOK model, just to check it really is an invariant.
So, in summary, Integrity is always true because:
Sent is always the concatenation of Got, Buffer and msg. That’s fairly obvious, because sender_ready sets msg and appends the same thing to Sent, and the other steps (sender_write_data and recv_read_data) just transfer some bytes from the start of one variable to the end of another.
Although, like all local information, the receiver’s have variable might be out-of-date, there must be at least that much data in the buffer, because the sender process will only have added more, not removed any. This is sufficient to ensure that we never do an out-of-range read.
Likewise, the sender’s free variable is a lower bound on the true amount of free space, because the receiver only ever creates more space. We will therefore never write beyond the free space.
I think this ability to explain why an algorithm works, by being shown examples where the inductive property doesn’t hold, is a really nice feature of TLA. Inductive invariants are useful as a first step towards writing a proof, but I think they’re valuable even on their own. If you’re documenting your own algorithm, this process will get you to explain your own reasons for believing it works (I tried it on a simple algorithm in my own code and it seemed helpful).
Some notes:
Originally, I had the free and have constraints depending on pc. However, the algorithm sets them to zero when not in use so it turns out they’re always true.
IntegrityI matches 532,224 states, even with a maximum Sent length of 1, but it passes! There are some games you can play to speed things up; see Using TLC to Check Inductive Invariance for some suggestions (I only discovered that while writing this up).
Proving Integrity
TLA provides a syntax for writing proofs, and integrates with TLAPS (the TLA+ Proof System) to allow them to be checked automatically.
Proving IntegrityI is just a matter of showing that Init => IntegrityI and that it is preserved by any possible [Next]_vars step. To do that, we consider each action of Next individually, which is long but simple enough.
I was able to prove it, but the recv_read_data action was a little difficult because we don’t know that want > 0 at that point, so we have to do some extra work to prove that transferring 0 bytes works, even though the real system never does that.
I therefore added an extra condition to IntegrityI that want is non-zero whenever it’s in use, and also conditions about have and free being 0 when not in use, for completeness:
IntegrityI ==
  [...]
  /\ want = 0 <=> pc[ReceiverReadID] \in {"recv_check_notify_read",
                                          "recv_notify_read", "recv_init",
                                          "recv_ready", "Done"}
  /\ \/ pc[ReceiverReadID] \in {"recv_got_len", "recv_recheck_len",
                                "recv_read_data"}
     \/ have = 0
  /\ \/ pc[SenderWriteID] \in {"sender_write", "sender_request_notify",
                               "sender_recheck_len", "sender_write_data"}
     \/ free = 0
Availability
Integrity was quite easy to prove, but I had more trouble trying to explain Availability. One way to start would be to add Availability as a property to check in the IntegrityI model. However, checking properties takes a while because TLC does it at the end, and the examples it finds may have several steps (it took 1m15s to find a counter-example for me).
Here’s a faster way (37s). The algorithm will deadlock if both sender and receiver are in their blocked states and neither interrupt is pending, so I made a new invariant, I, which says that deadlock can’t happen:
I ==
  /\ IntegrityI
  /\ ~ /\ pc[SenderWriteID] = "sender_blocked"
       /\ ~SpaceAvailableInt
       /\ pc[ReceiverReadID] = "recv_await_data"
       /\ ~DataReadyInt
I discovered some obvious facts about closing the connection. For example, the SenderLive flag is set if and only if the sender’s close thread hasn’t done anything. I’ve put them all together in CloseOK:
(* Some obvious facts about shutting down connections. *)
CloseOK ==
  \* An endpoint is live iff its close thread hasn't done anything:
  /\ pc[SenderCloseID] = "sender_open" <=> SenderLive
  /\ pc[ReceiverCloseID] = "recv_open" <=> ReceiverLive
  \* The send and receive loops don't terminate unless someone has closed the connection:
  /\ pc[ReceiverReadID] \in {"recv_final_check", "Done"} => ~ReceiverLive \/ ~SenderLive
  /\ pc[SenderWriteID] \in {"Done"} => ~ReceiverLive \/ ~SenderLive
  \* If the receiver closed the connection then we will get (or have got) the signal:
  /\ pc[ReceiverCloseID] = "Done" =>
       \/ SpaceAvailableInt
       \/ pc[SenderWriteID] \in {"sender_check_recv_live", "Done"}
But I had problems with other examples TLC showed me, and I realised that I didn’t actually know why this algorithm doesn’t deadlock.
Intuitively it seems clear enough: the sender puts data in the buffer when there’s space and notifies the receiver, and the receiver reads it and notifies the writer. What could go wrong? But both processes are working with information that can be out-of-date. By the time the sender decides to block because the buffer looked full, the buffer might be empty. And by the time the receiver decides to block because it looked empty, it might be full.
Maybe you already saw why it works from the C code, or the algorithm above, but it took me a while to figure it out! I eventually ended up with an invariant of the form:
I == ..
  /\ SendMayBlock    => SpaceWakeupComing
  /\ ReceiveMayBlock => DataWakeupComing
SendMayBlock is TRUE if we’re in a state that may lead to being blocked without checking the buffer’s free space again. Likewise, ReceiveMayBlock indicates that the receiver might block. SpaceWakeupComing and DataWakeupComing predict whether we’re going to get an interrupt. The idea is that if we’re going to block, we need to be sure we’ll be woken up. It’s a bit ugly, though, e.g.
DataWakeupComing ==
  \/ DataReadyInt                                \* Event sent
  \/ pc[SenderWriteID] = "sender_notify_data"    \* Event being sent
  \/ pc[SenderCloseID] = "sender_notify_closed"
  \/ pc[ReceiverCloseID] = "recv_notify_closed"
  \/ /\ NotifyWrite                              \* Event requested and ...
     /\ ReceiverLive                             \* Sender can see receiver is still alive and ...
     /\ \/ pc[SenderWriteID] = "sender_write_data" /\ free > 0
        \/ pc[SenderWriteID] = "sender_check_notify_data"
        \/ pc[SenderWriteID] = "sender_recheck_len" /\ Len(Buffer) < BufferSize
        \/ pc[SenderWriteID] = "sender_ready" /\ SenderLive /\ Len(Buffer) < BufferSize
        \/ pc[SenderWriteID] = "sender_write" /\ Len(Buffer) < BufferSize
        \/ pc[SenderWriteID] = "sender_request_notify" /\ Len(Buffer) < BufferSize
        \/ SpaceWakeupComing /\ Len(Buffer) < BufferSize /\ SenderLive
It did pass my model that tested sending one byte, and I decided to try a proof. Well, it didn’t work. The problem seems to be that DataWakeupComing and SpaceWakeupComing are really mutually recursive. The reader will wake up if the sender wakes it, but the sender might be blocked, or about to block. That’s OK though, as long as the receiver will wake it, which it will do, once the sender wakes it…
You’ve probably already figured it out, but I thought I’d document my confusion. It occurred to me that although each process might have out-of-date information, that could be fine as long as at any one moment one of them was right. The last process to update the buffer must know how full it is, so one of them must have correct information at any given time, and that should be enough to avoid deadlock.
That didn’t work either. When you’re at a proof step and can’t see why it’s correct, you can ask TLC to show you an example. e.g. if you’re stuck trying to prove that sender_request_notify preserves I when the receiver is at recv_ready, the buffer is full, and ReceiverLive = FALSE, you can ask for an example of that:
Example ==
  /\ PCOK
  /\ pc[SenderWriteID] = "sender_request_notify"
  /\ pc[ReceiverReadID] = "recv_ready"
  /\ ReceiverLive = FALSE
  /\ I
  /\ Len(Buffer) = BufferSize
You then create a new model that searches Example /\ [][Next]_vars and tests I. As long as Example has several constraints, you can use a much larger model for this. I also ask it to check the property [][FALSE]_vars, which means it will show any step starting from Example.
It quickly became clear what was wrong: it is quite possible that neither process is up-to-date. If both processes see the buffer contains X bytes of data, and the sender sends Y bytes and the receiver reads Z bytes, then the sender will think there are X + Y bytes in the buffer and the receiver will think there are X - Z bytes, and neither is correct. My original 1-byte buffer was just too small to find a counter-example.
The real reason why vchan works is actually rather obvious. I don’t know why I didn’t see it earlier. But eventually it occurred to me that I could make use of Got and Sent. I defined WriteLimit to be the total number of bytes that the sender would write before blocking, if the receiver never did anything further. And I defined ReadLimit to be the total number of bytes that the receiver would read if the sender never did anything else.
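The actual definitions in the spec have to case-split on the processes’ program counters; here is a much-simplified sketch of the idea behind ReadLimit (not the spec’s real definition):

\* Much-simplified sketch of the ReadLimit idea (not the spec's definition):
\* if the receiver would block with no wakeup pending, it reads nothing more;
\* otherwise it can at least consume whatever is already in the buffer.
ReadLimitSketch ==
  IF pc[ReceiverReadID] = "recv_await_data" /\ ~DataReadyInt
  THEN Len(Got)
  ELSE Len(Got) + Len(Buffer)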
Did I define these limits correctly? It’s easy to ask TLC to check some extra properties while it’s running. For example, I used this to check that ReadLimit behaves sensibly:
ReadLimitCorrect ==
  \* We will eventually receive what ReadLimit promises:
  /\ WF_vars(ReceiverRead) =>
       \A i \in AvailabilityNat :
         ReadLimit = i ~> Len(Got) >= i \/ ~ReceiverLive
  \* ReadLimit can only decrease if we decide to shut down:
  /\ [][ReadLimit' >= ReadLimit \/ ~ReceiverLive]_vars
  \* ReceiverRead steps don't change the read limit:
  /\ [][ReceiverRead => UNCHANGED ReadLimit \/ ~ReceiverLive]_vars
Because ReadLimit is defined in terms of what it does when no other processes run, this property should ideally be tested in a model without the fairness conditions (i.e. just Init /\ [][Next]_vars). Otherwise, fairness may force the sender to perform a step. We still want to allow other steps, though, to show that ReadLimit is a lower bound.
With this, we can argue that e.g. a 2-byte buffer will eventually transfer 3 bytes (the general induction is sketched after the list):
The receiver will eventually read 3 bytes as long as the sender eventually sends 3 bytes.
The sender will eventually send 3, if the receiver reads at least 1.
The receiver will read 1 if the sender sends at least 1.
The sender will send 1 if the reader has read at least 0 bytes, which is always true.
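Written out as a general induction, using informal shorthands that are not in the spec:

\* Sketch of the general induction (Sends/Reads are informal shorthands):
\*   Sends(n): the sender eventually sends n bytes in total
\*   Reads(n): the receiver eventually reads n bytes in total
\*
\*   Reads(n) follows from Sends(n)               \* byte n will exist to read
\*   Sends(n) follows from Reads(n - BufferSize)  \* space for byte n will appear
\*   Reads(k) is trivial for k <= 0, so induction on n gives both for every n.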
By this point, I was learning to be more cautious before trying a proof, so I added some new models to check this idea further. One prevents the sender from ever closing the connection and the other prevents the receiver from ever closing. That reduces the number of states to consider and I was able to check a slightly larger model.
I ==
  /\ IntegrityI
  /\ CloseOK
  \* If the reader is stuck, but data is available, the sender will unblock it:
  /\ ReaderShouldBeUnblocked =>
       \* The sender is going to write more:
       \/ WriteLimit > Len(Got) + Len(Buffer) /\ Len(msg) > 0 /\ SenderLive
       \* The sender is about to increase ReadLimit:
       \/ (\/ pc[SenderWriteID] = "sender_check_notify_data" /\ NotifyWrite
           \/ pc[SenderWriteID] = "sender_notify_data")
          /\ ReadLimit < Len(Got) + Len(Buffer)
       \* The sender is about to notify us of shutdown:
       \/ pc[SenderCloseID] \in {"sender_notify_closed"}
  \* If the writer is stuck, but there is now space available, the receiver will unblock it:
  /\ WriterShouldBeUnblocked =>
       \* The reader is going to read more:
       \/ ReadLimit > Len(Got) /\ ReceiverLive
       \* The reader is about to increase WriteLimit:
       \/ (\/ pc[ReceiverReadID] = "recv_check_notify_read" /\ NotifyRead
           \/ pc[ReceiverReadID] = "recv_notify_read")
          /\ WriteLimit < Len(Got) + BufferSize
       \* The receiver is about to notify us of shutdown:
       \/ pc[ReceiverCloseID] \in {"recv_notify_closed"}
  /\ NotifyFlagsCorrect
If a process is on a path to being blocked then it must have set its notify flag. NotifyFlagsCorrect says that in that case, the flag is still set, or the interrupt has been sent, or the other process is just about to trigger the interrupt.
I managed to use that to prove that the sender’s steps preserved I, but I needed a little extra to finish the receiver proof. At this point, I finally spotted the obvious invariant (which you, no doubt, saw all along): whenever NotifyRead is still set, the sender has accurate information about the buffer.
/\ NotifyRead =>
     \* The sender has accurate information about the buffer:
     \/ WriteLimit = Len(Got) + BufferSize
     \* Or the flag is being cleared right now:
     \/ pc[ReceiverReadID] = "recv_check_notify_read"
That’s pretty obvious, isn’t it? The sender checks the buffer after setting the flag, so it must have accurate information at that point. The receiver clears the flag after reading from the buffer (which invalidates the sender’s information).
Now I had a dilemma. There was obviously going to be a matching property about NotifyWrite. Should I add that, or continue with just this? I was nearly done, so I continued and finished off the proofs.
With I proved, I was able to prove some other nice things quite easily:
THEOREM /\ I /\ SenderLive /\ ReceiverLive
        /\ \/ pc[SenderWriteID] = "sender_ready"
           \/ pc[SenderWriteID] = "sender_blocked" /\ ~SpaceAvailableInt
        => ReadLimit = Len(Got) + Len(Buffer)
That says that, whenever the sender is idle or blocked, the receiver will read everything sent so far, without any further help from the sender. And:
THEOREM /\ I /\ SenderLive /\ ReceiverLive
        /\ pc[ReceiverReadID] \in {"recv_await_data"}
        /\ ~DataReadyInt
        => WriteLimit = Len(Got) + BufferSize
That says that whenever the receiver is blocked, the sender can fill the buffer. That’s pretty nice. It would be possible to make a vchan system that e.g. could only send 1 byte at a time and still prove it couldn’t deadlock and would always deliver data, but here we have shown that the algorithm can use the whole buffer. At least, that’s what these theorems say as long as you believe that ReadLimit and WriteLimit are defined correctly.
With the proof complete, I then went back and deleted all the stuff about ReadLimit and WriteLimit from I and started again with just the new rules about NotifyRead and NotifyWrite. Instead of using WriteLimit = Len(Got) + BufferSize to indicate that the sender has accurate information, I made a new SenderInfoAccurate that just returns TRUE whenever the sender will fill the buffer without further help. That avoids some unnecessary arithmetic, which TLAPS needs a lot of help with.
(* The sender's information is accurate if whenever it is going to block,
   the buffer really is full. *)
SenderInfoAccurate ==
  \* We have accurate information:
  \/ Len(Buffer) + free = BufferSize
  \* In these states, we're going to check the buffer before blocking:
  \/ pc[SenderWriteID] \in {"sender_ready", "sender_request_notify", "sender_write",
                            "sender_recheck_len", "sender_check_recv_live", "Done"}
  \/ pc[SenderWriteID] \in {"sender_request_notify"} /\ free < Len(msg)
  \* If we've been signalled, we'll immediately wake next time we try to block:
  \/ SpaceAvailableInt
  \* We're about to write some data:
  \/ /\ pc[SenderWriteID] \in {"sender_write_data"}
     /\ free >= Len(msg)    \* But we won't need to block
  \* If we wrote all the data we intended to, we'll return without blocking:
  \/ /\ pc[SenderWriteID] \in {"sender_check_notify_data", "sender_notify_data"}
     /\ Len(msg) = 0
By talking about accuracy instead of the write limit, I was also able to include “Done” in with the other happy cases. Before, that had to be treated as a possible problem because the sender can’t use the full buffer when it’s Done.
With this change, the proof of Spec => []I became much simpler (384 lines shorter). And most of the remaining steps were trivial.
The ReadLimit and WriteLimit idea still seemed useful, though, but I found I was able to prove the same things from I. e.g. we can still conclude this, even if I doesn’t mention WriteLimit:
THEOREM /\ I /\ SenderLive /\ ReceiverLive
        /\ pc[ReceiverReadID] \in {"recv_await_data"}
        /\ ~DataReadyInt
        => WriteLimit = Len(Got) + BufferSize
That’s nice, because it keeps the invariant and its proofs simple, but we still get the same result in the end.
I initially defined WriteLimit to be the number of bytes the sender could write if the sending application wanted to send enough data, but I later changed it to be the actual number of bytes it would write if the application didn’t try to send any more. This is because otherwise, with packet-based sends (where we only write when the buffer has enough space for the whole message at once) WriteLimit could go down. e.g. we think we can write another 3 bytes, but then the application decides to write 10 bytes and now we can’t write anything more.
The limit theorems above are useful properties, but it would be good to have more confidence that ReadLimit and WriteLimit are correct. I was able to prove some useful lemmas here.
First, ReceiverRead steps don’t change ReadLimit (as long as the receiver hasn’t closed the connection):
THEOREM ReceiverReadPreservesReadLimit ==
  ASSUME I, ReceiverLive, ReceiverRead
  PROVE  UNCHANGED ReadLimit
This gives us a good reason to think that ReadLimit is correct:
When the receiver is blocked it cannot read any more than it has without help.
ReadLimit is defined to be Len(Got) then, so ReadLimit is obviously correct for this case.
Since read steps preserve ReadLimit, this shows that ReadLimit is correct in all cases.
e.g. if ReadLimit = 5 and no other processes do anything, then we will end up in a state with the receiver blocked, and ReadLimit = Len(Got) = 5 and so we really did read a total of 5 bytes.
I was also able to prove that it never decreases (unless the receiver closes the connection):
THEOREM ReadLimitMonotonic ==
  ASSUME I, Next, ReceiverLive
  PROVE  ReadLimit' >= ReadLimit
So, if ReadLimit = n then it will always be at least n, and if the receiver ever blocks then it will have read at least n bytes.
I was able to prove similar properties about WriteLimit. So, I feel reasonably confident that these limit predictions are correct.
Disappointingly, we can’t actually prove Availability using TLAPS, because currently it understands very little temporal logic (see TLAPS limitations). However, I could show that the system can’t deadlock while there’s data to be transmitted:
(* We can't get into a state where the sender and receiver are both blocked
   and there is no wakeup pending: *)
THEOREM DeadlockFree1 ==
  ASSUME I
  PROVE  ~ /\ pc[SenderWriteID] = "sender_blocked"
           /\ ~SpaceAvailableInt /\ SenderLive
           /\ pc[ReceiverReadID] = "recv_await_data"
           /\ ~DataReadyInt /\ ReceiverLive
<1> SUFFICES ASSUME /\ pc[SenderWriteID] = "sender_blocked"
                    /\ ~SpaceAvailableInt /\ SenderLive
                    /\ pc[ReceiverReadID] = "recv_await_data"
                    /\ ~DataReadyInt /\ ReceiverLive
             PROVE FALSE
    OBVIOUS
<1> NotifyFlagsCorrect BY DEF I
<1> NotifyRead BY DEF NotifyFlagsCorrect
<1> NotifyWrite
  <2> have = 0 BY DEF IntegrityI, I
  <2> QED BY DEF NotifyFlagsCorrect
<1> SenderInfoAccurate /\ ReaderInfoAccurate BY DEF I
<1> free = 0 BY DEF IntegrityI, I
<1> Len(Buffer) = BufferSize BY DEF SenderInfoAccurate
<1> Len(Buffer) = 0 BY DEF ReaderInfoAccurate
<1> QED BY BufferSizeType

(* We can't get into a state where the sender is idle and the receiver is
   blocked unless the buffer is empty (all data sent has been consumed): *)
THEOREM DeadlockFree2 ==
  ASSUME I,
         pc[SenderWriteID] = "sender_ready", SenderLive,
         pc[ReceiverReadID] = "recv_await_data", ReceiverLive,
         ~DataReadyInt
  PROVE  Len(Buffer) = 0
I’ve included the proof of DeadlockFree1 above:
To show deadlock can’t happen, it suffices to assume it has happened and show a contradiction.
If both processes are blocked then NotifyRead and NotifyWrite must both be set (because processes don’t block without setting them, and if they’d been unset then an interrupt would now be pending and we wouldn’t be blocked).
Since NotifyRead is still set, the sender is correct in thinking that the buffer is still full.
Since NotifyWrite is still set, the receiver is correct in thinking that the buffer is still empty.
That would be a contradiction, since BufferSize isn’t zero.
If it doesn’t deadlock, then some process must keep getting woken up by interrupts, which means that interrupts keep being sent. We only send interrupts after making progress (writing to the buffer or reading from it), so we must keep making progress. We’ll have to content ourselves with that argument.
Experiences with TLAPS
The toolbox doesn’t come with the proof system, so you need to install it separately. The instructions are out-of-date and have a lot of broken links. In May, I turned the steps into a Dockerfile, which got it partly installed, and asked on the TLA group for help, but no-one else seemed to know how to install it either. By looking at the error messages and searching the web for programs with the same names, I finally managed to get it working in December. If you have trouble installing it too, try using my Docker image.
Once installed, you can write a proof in the toolbox and then press Ctrl-G, Ctrl-G to check it. On success, the proof turns green. On failure, the failing step turns red. You can also do the Ctrl-G, Ctrl-G combination on a single step to check just that step. That’s useful, because it’s pretty slow. It takes more than 10 minutes to check the complete specification.
TLA proofs are done in the mathematical style, which is to write a set of propositions and vaguely suggest that thinking about these will lead you to the proof. This is good for building intuition, but bad for reproducibility. A mathematical proof is considered correct if the reader is convinced by it, which depends on the reader. In this case, the “reader” is a collection of automated theorem-provers with various timeouts. This means that whether a proof is correct or not depends on how fast your computer is, how many programs are currently running, etc. A proof might pass one day and fail the next. Some proof steps consistently pass when you try them individually, but consistently fail when checked as part of the whole proof. If a step fails, you need to break it down into smaller steps.
Sometimes the proof system is very clever, and immediately solves complex steps. For example, here is the proof that the SenderClose process (which represents the sender closing the channel), preserves the invariant I:
LEMMA SenderClosePreservesI ==
  I /\ SenderClose => I'
<1> SUFFICES ASSUME I, SenderClose
             PROVE  I'
    OBVIOUS
<1> IntegrityI BY DEF I
<1> TypeOK BY DEF IntegrityI
<1> PCOK BY DEF IntegrityI
<1>1. CASE sender_open
  <2> USE <1>1 DEF sender_open
  <2> UNCHANGED << pc[SenderWriteID], pc[ReceiverReadID], pc[ReceiverCloseID] >> BY DEF PCOK
  <2> pc'[SenderCloseID] = "sender_notify_closed" BY DEF PCOK
  <2> TypeOK' BY DEF TypeOK
  <2> PCOK' BY DEF PCOK
  <2> IntegrityI' BY DEF IntegrityI
  <2> NotifyFlagsCorrect' BY DEF NotifyFlagsCorrect, I
  <2> QED BY DEF I, SenderInfoAccurate, ReaderInfoAccurate, CloseOK
<1>2. CASE sender_notify_closed
  <2> USE <1>2 DEF sender_notify_closed
  <2> UNCHANGED << pc[SenderWriteID], pc[ReceiverReadID], pc[ReceiverCloseID] >> BY DEF PCOK
  <2> pc'[SenderCloseID] = "Done" BY DEF PCOK
  <2> TypeOK' BY DEF TypeOK
  <2> PCOK' BY DEF PCOK
  <2> IntegrityI' BY DEF IntegrityI
  <2> NotifyFlagsCorrect' BY DEF NotifyFlagsCorrect, I
  <2> QED BY DEF I, SenderInfoAccurate, ReaderInfoAccurate, CloseOK
<1>3. QED BY <1>1, <1>2 DEF SenderClose
A step such as IntegrityI' BY DEF IntegrityI says “You can see that IntegrityI will be true in the next step just by looking at its definition”. So this whole lemma is really just saying “it’s obvious”. And TLAPS agrees.
At other times, TLAPS can be maddeningly stupid. And it can’t tell you what the problem is - it can only make things go red.
For example, this fails:
THEOREM ASSUME pc' = [pc EXCEPT ![1] = "l2"],
               pc[2] = "l1"
        PROVE  pc'[2] = "l1"
OBVIOUS
We’re trying to say that pc[2] is unchanged, given that pc' is the same as pc except that we changed pc[1]. The problem is that TLA is an untyped language. Even though we know we did a mapping update to pc, that isn’t enough (apparently) to conclude that pc is in fact a mapping. To fix it, you need:
THEOREM ASSUME pc \in [Nat -> STRING],
               pc' = [pc EXCEPT ![1] = "l2"],
               pc[2] = "l1"
        PROVE  pc'[2] = "l1"
OBVIOUS
The extra pc \in [Nat -> STRING] tells TLA the type of the pc variable. I found missing type information to be the biggest problem when doing proofs, because you just automatically assume that the computer will know the types of things. Another example:
THEOREM ASSUME NEW x \in Nat, NEW y \in Nat,
               x + Min(y, 10) = x + y
        PROVE  Min(y, 10) = y
OBVIOUS
We’re just trying to remove the x + ... from both sides of the equation. The problem is, TLA doesn’t know that Min(y, 10) is a number, so it doesn’t know whether the normal laws of addition apply in this case. It can’t tell you that, though - it can only go red. Here’s the solution:
THEOREM ASSUME NEW x \in Nat, NEW y \in Nat,
               x + Min(y, 10) = x + y
        PROVE  Min(y, 10) = y
BY DEF Min
The BY DEF Min tells TLAPS to share the definition of Min with the solvers. Then they can see that Min(y, 10) must be a natural number too and everything works.
Another annoyance is that sometimes it can’t find the right lemma to use, even when you tell it exactly what it needs. Here’s an extreme case:
LEMMA TransferFacts ==
  ASSUME NEW src, NEW src2,    \* (TLAPS doesn't cope with "NEW VARIABLE src")
         NEW dst, NEW dst2,
         NEW i \in 1..Len(src),
         src \in MESSAGE,
         dst \in MESSAGE,
         dst2 = dst \o Take(src, i),
         src2 = Drop(src, i)
  PROVE  /\ src2 \in MESSAGE
         /\ Len(src2) = Len(src) - i
         /\ dst2 \in MESSAGE
         /\ Len(dst2) = Len(dst) + i
         /\ UNCHANGED (dst \o src)
PROOF OMITTED

LEMMA SameAgain ==
  ASSUME NEW src, NEW src2,    \* (TLAPS doesn't cope with "NEW VARIABLE src")
         NEW dst, NEW dst2,
         NEW i \in 1..Len(src),
         src \in MESSAGE,
         dst \in MESSAGE,
         dst2 = dst \o Take(src, i),
         src2 = Drop(src, i)
  PROVE  /\ src2 \in MESSAGE
         /\ Len(src2) = Len(src) - i
         /\ dst2 \in MESSAGE
         /\ Len(dst2) = Len(dst) + i
         /\ UNCHANGED (dst \o src)
BY TransferFacts
TransferFacts states some useful facts about transferring data between two variables. You can prove that quite easily. SameAgain is identical in every way, and just refers to TransferFacts for the proof. But even with only one lemma to consider - one that matches all the assumptions and conclusions perfectly - none of the solvers could figure this one out!
My eventual solution was to name the bundle of results. This works:
TransferResults(src, src2, dst, dst2, i) ==
  /\ src2 \in MESSAGE
  /\ Len(src2) = Len(src) - i
  /\ dst2 \in MESSAGE
  /\ Len(dst2) = Len(dst) + i
  /\ UNCHANGED (dst \o src)

LEMMA TransferFacts ==
  ASSUME NEW src, NEW src2,
         NEW dst, NEW dst2,
         NEW i \in 1..Len(src),
         src \in MESSAGE,
         dst \in MESSAGE,
         dst2 = dst \o Take(src, i),
         src2 = Drop(src, i)
  PROVE  TransferResults(src, src2, dst, dst2, i)
PROOF OMITTED

LEMMA SameAgain ==
  ASSUME NEW src, NEW src2,
         NEW dst, NEW dst2,
         NEW i \in 1..Len(src),
         src \in MESSAGE,
         dst \in MESSAGE,
         dst2 = dst \o Take(src, i),
         src2 = Drop(src, i)
  PROVE  TransferResults(src, src2, dst, dst2, i)
BY TransferFacts
Most of the art of using TLAPS is in controlling how much information to share with the provers. Too little (such as failing to provide the definition of Min) and they don’t have enough information to find the proof. Too much (such as providing the definition of TransferResults) and they get overwhelmed and fail to find the proof.
It’s all a bit frustrating, but it does work, and being machine checked does give you some confidence that your proofs are actually correct.
Another, perhaps more important, benefit of machine checked proofs is that when you decide to change something in the specification you can just ask it to re-check everything. Go and have a cup of tea, and when you come back it will have highlighted in red any steps that need to be updated. I made a lot of changes, and this worked very well.
The TLAPS philosophy is that
If you are concerned with an algorithm or system, you should not be spending your time proving basic mathematical facts. Instead, you should assert the mathematical theorems you need as assumptions or theorems.
So even if you can’t find a formal proof of every step, you can still use TLAPS to break it down into steps that you either can prove, or that you think are obvious enough that they don’t require a proof. However, I was able to prove everything I needed for the vchan specification within TLAPS.
The final specification
I did a little bit of tidying up at the end. In particular, I removed the want variable from the specification. I didn’t like it because it doesn’t correspond to anything in the OCaml implementation, and the only place the algorithm uses it is to decide whether to set NotifyWrite, which I thought might be wrong anyway.
I changed this:
recv_got_len: if (have >= want) goto recv_read_data
              else NotifyWrite := TRUE;
to:
recv_got_len: either {
                if (have > 0) goto recv_read_data
                else NotifyWrite := TRUE;
              } or {
                NotifyWrite := TRUE;
              };
That always allows an implementation to set NotifyWrite if it wants to, or to skip that step just as long as have > 0. That covers the current C behaviour, my proposed C behaviour, and the OCaml implementation. It also simplifies the invariant, and even made the proofs shorter!
I put the final specification online at spec-vchan. I also configured Travis CI to check all the models and verify all the proofs. That’s useful because sometimes I’m too impatient to recheck everything on my laptop before pushing updates.
You can generate a PDF version of the specification with make pdfs. Expressions there can be a little easier to read because they use proper symbols, but it also breaks things up into pages, which is highly annoying. It would be nice if it could omit the proofs too, as they’re really only useful if you’re trying to edit them. I’d rather just see the statement of each theorem.
The original bug
With my new understanding of vchan, I couldn’t see anything obvious wrong with the C code (at least, as long as you keep the connection open, which the firewall does).
I then took a look at ocaml-vchan. The first thing I noticed was that someone had commented out all the memory barriers, noting in the Git log that they weren’t needed on x86. I am using x86, so that’s not it, but I filed a bug about it anyway: Missing memory barriers.
The other strange thing I saw was the behaviour of the read function. It claims to implement the Mirage FLOW interface, which says that read “blocks until some data is available and returns a fresh buffer containing it”. However, looking at the code, what it actually does is to return a pointer directly into the shared buffer. It then delays updating the consumer counter until the next call to read. That’s rather dangerous, and I filed another bug about that: Read has very surprising behaviour. However, when I checked the mirage-qubes code, it just takes this buffer and makes a copy of it immediately. So that’s not the bug either.
Also, the original bug report mentioned a 10 second timeout, and neither the C implementation nor the OCaml one had any timeouts. Time to look at QubesDB itself.
QubesDB accepts messages from either the guest VM (the firewall) or from local clients connected over Unix domain sockets. The basic structure is:
while True:
    await vchan event, local client data, or 10 second timeout
    while vchan.receive_buffer non-empty:
        handle_vchan_data()
    for each ready client:
        handle_client_data()
The suspicion was that we were missing a vchan event, but then it was discovering that there was data in the buffer anyway due to the timeout. Looking at the code, it does seem to me that there is a possible race condition here:
A local client asks to send some data.
handle_client_data sends the data to the firewall using a blocking write.
The firewall sends a message to QubesDB at the same time and signals an event because the firewall-to-db buffer has data.
QubesDB gets the event but ignores it because it’s doing a blocking write and there’s still no space in the db-to-firewall direction.
The firewall updates its consumer counter and signals another event, because the buffer now has space.
The blocking write completes and QubesDB returns to the main loop.
QubesDB goes to sleep for 10 seconds, without checking the buffer.
I don’t think this is the cause of the bug though, because the only messages the firewall might be sending here are QDB_RESP_OK messages, and QubesDB just discards such messages.
I managed to reproduce the problem myself, and saw that in fact QubesDB doesn’t make any progress due to the 10 second timeout. It just tries to go back to sleep for another 10 seconds and then immediately gets woken up by a message from a local client. So, it looks like QubesDB is only sending updates every 10 seconds because its client, qubesd, is only asking it to send updates every 10 seconds! And looking at the qubesd logs, I saw stacktraces about libvirt failing to attach network devices, so I read the Xen network device attachment specification to check that the firewall implemented that correctly.
I’m kidding, of course. There isn’t any such specification. But maybe this blog post will inspire someone to write one…
Conclusions
As users of open source software, we’re encouraged to look at the source code and check that it’s correct ourselves. But that’s pretty difficult without a specification saying what things are supposed to do. Often I deal with this by learning just enough to fix whatever bug I’m working on, but this time I decided to try making a proper specification instead. Making the TLA specification took rather a long time, but it was quite pleasant. Hopefully the next person who needs to know about vchan will appreciate it.
A TLA specification generally defines two sets of behaviours. The first is the set of desirable behaviours (e.g. those where the data is delivered correctly). This definition should clearly explain what users can expect from the system. The second defines the behaviours of a particular algorithm. This definition should make it easy to see how to implement the algorithm. The TLC model checker can check that the algorithm’s behaviours are all acceptable, at least within some defined limits.
Writing a specification using the TLA notation forces us to be precise about what we mean. For example, in a prose specification we might say “data sent will eventually arrive”, but in an executable TLA specification we’re forced to clarify what happens if the connection is closed. I would have expected that if a sender writes some data and then closes the connection then the data would still arrive, but the C implementation of vchan does not always ensure that. The TLC model checker can find a counter-example showing how this can fail in under a minute.
To explain why the algorithm always works, we need to find an inductive invariant. The TLC model checker can help with this, by presenting examples of unreachable states that satisfy the invariant but don’t preserve it after taking a step. We must add constraints to explain why these states are invalid. This was easy for the Integrity invariant, which explains why we never receive incorrect data, but I found it much harder to prove that the system cannot deadlock. I suspect that the original designer of a system would find this step easy, as presumably they already know why it works.
Once we have found an inductive invariant, we can write a formal machine-checked proof that the invariant is always true. Although TLAPS doesn’t allow us to prove liveness properties directly, I was able to prove various interesting things about the algorithm: it doesn’t deadlock; when the sender is blocked, the receiver can read everything that has been sent; and when the receiver is blocked, the sender can fill the entire buffer.
Writing formal proofs is a little tedious, largely because TLA is an untyped language. However, there is nothing particularly difficult about it, once you know how to work around various limitations of the proof checkers.
You might imagine that TLA would only work on very small programs like libvchan, but this is not the case. It’s just a matter of deciding what to specify in detail. For example, in this specification I didn’t give any details about how ring buffers work, but instead used a single Buffer variable to represent them. For a specification of a larger system using vchan, I would model each channel using just Sent and Got and an action that transferred some of the difference on each step.
The TLA Toolbox has some rough edges. The ones I found most troublesome were: the keyboard shortcuts frequently stop working; when a temporal property is violated, it doesn’t tell you which one it was; and the model explorer tooltips appear right under the mouse pointer, preventing you from scrolling with the mouse wheel. It also likes to check its “news feed” on a regular basis. It can’t seem to do this at the same time as other operations, and if you’re in the middle of a particularly complex proof checking operation, it will sometimes suddenly pop up a box suggesting that you cancel your job, so that it can get back to reading the news.
However, it is improving. In the latest versions, when you get a syntax error, it now tells you where in the file the error is. And pressing Delete or Backspace while editing no longer causes it to crash and lose all unsaved data. In general I feel that the TLA Toolbox is quite usable now. If I were designing a new protocol, I would certainly use TLA to help with the design.
TLA does not integrate with any language type systems, so even after you have a specification you still need to check manually that your code matches the spec. It would be nice if you could check this automatically, somehow.
One final problem is that whenever I write a TLA specification, I feel the need to explain first what TLA is. Hopefully it will become more popular and that problem will go away.
Securing the Digital Restaurant at the Edge
The concept of smart kitchens is starting to creep into the restaurant industry.
Whether it’s adopting cook and hold ovens that reduce energy costs or adopting food safety management systems like Navitas, newer technology is being introduced to help restaurants run more efficiently and of course produce more revenue.
Food waste of course is one of the main problems being attacked by newer technology, since it can cost commercial kitchens between five and 20 percent of all food purchased.
While all these advances are great and can help increase revenue while lowering costs, they also create other problems.
Even something as simple as feeding in data from thermometers attached to fridges and freezers and providing a continuous feed allows restaurants to know when equipment breaks down before the food spoils. Likewise, being able to measure consumption automatically produces more accurate orders. Capturing POS order feeds in real time and correlating to how much is actually in the fridge allows forecasting models to be generated.
A lot of these sensors are small and simply shoot off a continuous data stream over Bluetooth or WiFi to one or more computers in the back of the house. From there more interesting software can collect and process the data and make decisions.
Even machine learning software, a newer style of programming that deals with large datasets, is starting to find its way into the restaurant. Some of you might wonder why all of this is not just going straight to “the cloud.” There are a few factors at work here. One is latency (“how fast can you talk to the cloud?”), which is a problem if you need a decision on something immediately, as in “When do I switch this oven off?” or “When does the temperature change to 92?”. This is usually coupled with a second problem of very large bandwidth needs (“how fat of a pipe are we shoving all of this data into?”). When you have a multitude of sensors continuously feeding data back, this bandwidth problem becomes larger.
These advances also raise questions that take on greater significance: how do you properly secure the computers and, more importantly, how do you actually manage them across your fleet of stores? Even if you only have a handful of pizzerias in a small geographical area, you are already dealing with more physical locations than your average Silicon Valley company, so securing and managing them becomes a task in and of itself.
McDonalds, for instance, has 14,000 locations in the US and 35,000 worldwide. DineEquity has about 1,900 Applebees and 1,650 IHOP locations. Now contrast that with the 56 total datacenters that Amazon Web Services, the dominant cloud provider, currently has. While the cloud providers obviously eclipse restaurants on the number of servers per location, restaurants eclipse them on the number of locations themselves. Managing all of this becomes a very different task, especially since there isn’t an army of 20,000 engineers working on it like at the big FANG companies. The colloquial term for deploying servers in situations like this is “the edge.”
Chick-fil-A recently started introducing edge server equipment to their chain of stores so they could measure the overall efficiency of operations. The ability to know how many fries they should be frying at any given time, or what the day’s demand for chicken might be, can all be controlled and predicted when data is collected and software is applied. What’s better, when it’s applied across the entirety of your store fleet, you start getting insights you simply could not achieve before. While this is a newer practice, you’ll start to see it everywhere in the coming years.
When you install smart kitchen hardware, remember the smart part always comes with software, and that software needs to live somewhere. Commonly it runs on operating systems such as Windows and Linux, which are prevalent but notorious for causing a lot of security problems. However, a third option known as unikernels also exists; ask your vendors if they support that as a deployment option. Being self-contained, unikernels prevent applications from being used as landing pads for hackers, for whom it might otherwise be just a hop, skip and a jump to things like kitchen display systems, or worse, your QuickBooks or timeclock system.
It’s hard enough to secure one computer in one location. When you have many locations, it becomes tougher. Just as you install security cameras, put locks on your doors and employ heavy-duty safes, you also need to secure the digital infrastructure you have, and the reality is that many systems are simply wide open.
Your restaurants are going to reap big rewards for taking advantage of new smart kitchen equipment but keep in mind that digital brings its own security challenges and the edge is uncharted territory.
The Next Generation of Cloud Computing
Cloud isn’t for technology geeks anymore: by now, a majority of organizations have implemented cloud for improved efficiency in their business processes. An enterprise on the cloud can reap many benefits, as it is able to scale its resources whenever there are heavy business demands. Cloud storage allows a user to store, retrieve and move data seamlessly, and the security the cloud offers compares well with other platforms. The simplicity of the cloud is what makes it so easy to adopt in a business: it provides practically everything an admin might need to carry out business functions smoothly.
It is important to understand what newer technologies the cloud can provide in the future, because innovation doesn’t stop: there are always advancements and improvements to any particular technology. The cloud taps into the expertise of an enterprise to bring out its best. Cloud computing has come a long way, from being adopted initially for higher efficiency and cost savings to emerging as a platform for innovation.
What does the future hold for cloud computing?
Almost everything is connected to the cloud in one way or another, except for data deliberately kept in local storage for security reasons. Cloud computing offers enormous opportunity, and there are many predictions about where it goes next, since it can open doors to new services, platforms, applications, and much more. Innumerable possibilities pave the way for innumerable innovations. In the next decade, cloud computing will be an integral part of everyday life, connecting the devices we use to a single platform.
In this article we take a look at the next-generation cloud technologies that will shape the future of cloud computing.
Unikernels
Unikernels are the newest development in the infrastructure virtualization space. A unikernel is an executable image that runs natively on a hypervisor without the help of a separate operating system. The image consists of the application code plus only those operating-system functions the application actually needs. Unikernels are built from a library operating system, which is a collection of libraries implementing an operating system's core capabilities. Cloud computing has seen many forms of virtualization, and unikernels are the latest hypervisor-level virtualization technology in the emerging container landscape.
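As a rough illustration, the program below would be the entire application payload of a unikernel image; everything else in the image comes from the library operating system.

```go
// A complete unikernel payload: this program is the only application
// code in the image; the rest is supplied by the library OS.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from a unikernel")
	})
	// The image has no init system, no shell, no other processes:
	// when this server stops, the unikernel's work is done.
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

With a builder such as NanoVMs' OPS, something like `ops run ./server` on the compiled binary would wrap it together with a minimal library OS into a bootable image; treat the exact command as an assumption and consult your builder's documentation.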
CaaS
Container as a Service (CaaS) is an offering in which cloud providers supply container orchestration along with the compute resources to run containers. Developers use the framework through an API or a web interface to manage containers easily. One can say that CaaS is a new application-deployment layer for the cloud platform. It also points toward tools aimed at easing the friction between operations staff and development teams around pushing application content and monitoring applications.
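As a sketch of the "manage containers through an API" idea, the snippet below POSTs a container spec to a hypothetical CaaS REST endpoint. The URL, token, and JSON fields are invented for illustration and do not belong to any particular provider's API.

```go
// Sketch: launching a container through a hypothetical CaaS REST API.
// The endpoint, token, and request fields are illustrative only.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// ContainerSpec mirrors the kind of request body a CaaS API might accept.
type ContainerSpec struct {
	Image    string `json:"image"`
	Replicas int    `json:"replicas"`
	Port     int    `json:"port"`
}

func main() {
	spec := ContainerSpec{Image: "nginx:1.25", Replicas: 3, Port: 80}
	body, err := json.Marshal(spec)
	if err != nil {
		log.Fatal(err)
	}

	req, err := http.NewRequest(http.MethodPost,
		"https://caas.example.com/v1/containers", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer <token>") // placeholder credential

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("CaaS responded with status:", resp.Status)
}
```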
Serverless Architecture
The cloud has led many companies to shut down their own data centers, because CIOs trust the services cloud computing provides and the boost it has given their businesses. IT departments rent a mix of tools from a couple of vendors whenever they need extra processing power or storage. Now IT leaders are searching for an even more cost-efficient way to rent computing power: rather than managing a cloud architecture themselves, they want to go serverless. With serverless computing in the picture, the cloud is used simply to fuel applications and other functions, and resources are provisioned only when an event calls for them. The Internet of Things (IoT) is a good example of such event-based computing.
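The IoT case lends itself to a concrete sketch. Below is a minimal serverless function written against the real github.com/aws/aws-lambda-go package; the event shape and threshold are invented for illustration. The platform spins up compute only when an event arrives, which is exactly the model described above.

```go
// A minimal serverless function: the platform invokes this handler only
// when an event arrives, so no server sits provisioned in between.
// The SensorEvent shape is hypothetical.
package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-lambda-go/lambda"
)

// SensorEvent is a made-up IoT payload.
type SensorEvent struct {
	DeviceID string  `json:"device_id"`
	TempC    float64 `json:"temp_c"`
}

// handler runs once per event, then the compute goes away.
func handler(ctx context.Context, e SensorEvent) (string, error) {
	if e.TempC > 75 {
		return fmt.Sprintf("device %s overheating at %.1fC", e.DeviceID, e.TempC), nil
	}
	return fmt.Sprintf("device %s nominal", e.DeviceID), nil
}

func main() {
	lambda.Start(handler)
}
```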
Software Defined Networking (SDN)
Software-defined networking (SDN) is rapidly becoming a key component of data-center automation. SDN provides efficient ways to manage virtualization, saves cost, and speeds up service delivery. It gives data-center managers control over every aspect of the data center, which means greater agility in managing and upgrading their hardware. Modern data centers have become too complex to manage by hand, so an automation tool is essential; it also helps enterprises enhance security by minimizing vulnerabilities caused by human error.
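As a sketch of that programmatic control, the snippet below pushes a flow rule to a hypothetical SDN controller's northbound REST API. Real controllers (OpenDaylight, ONOS, and others) expose similar but not identical endpoints, so treat the URL and field names as assumptions.

```go
// Sketch: installing a flow rule via a hypothetical SDN controller API.
// Endpoint and field names are illustrative, not a specific controller's.
package main

import (
	"bytes"
	"encoding/json"
	"log"
	"net/http"
)

// FlowRule is an OpenFlow-style match/action pair in invented JSON form.
type FlowRule struct {
	Switch   string `json:"switch"`   // datapath ID of the target switch
	Priority int    `json:"priority"`
	MatchIP  string `json:"match_ip"` // traffic to match
	Action   string `json:"action"`   // e.g. "drop" or "forward:2"
}

func main() {
	rule := FlowRule{
		Switch:   "of:0000000000000001",
		Priority: 100,
		MatchIP:  "10.0.0.0/24",
		Action:   "drop",
	}
	body, err := json.Marshal(rule)
	if err != nil {
		log.Fatal(err)
	}

	resp, err := http.Post("https://sdn-controller.example.com/flows",
		"application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println("controller replied:", resp.Status)
}
```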
Conclusion
Cloud computing has a bright future, with many technological breakthroughs and innovations still ahead. The technology deployed in the market today probably won't look the same tomorrow. That constant change, driven by ever better upgrades, will help many organizations reach their potential and hit their targets. The cloud will bring businesses far more benefits than anyone can imagine today.
0 notes
Text
[Parkoz] Received my overseas-deal "Bome".. the wrong one arrived - 2018-04-16 23:22:38
[Ppomppu] Used cars) Aslan vs K9 - 2018-04-16 23:22:38
[Ddanzi Ilbo] The naengmyeon place from "Master of Living" - 2018-04-16 23:22:19
[Ddanzi Ilbo] If Joo Jin-hyung becomes FSS governor, the labor world… - 2018-04-16 23:22:19
[Today's Humor] American-style mockery - 2018-04-16 23:22:11
[SLRCLUB] Did you get the Canon P&I admission-ticket text message? - 2018-04-16 23:17:49
[SLRCLUB] No matter how pretty and good-natured she is.. - 2018-04-16 23:17:49
[SLRCLUB] *A working nude model's spectacular photo-outing plans* - 2018-04-16 23:17:49
[Ruliweb (Free Board)] Last year really was an adventure in itself - 2018-04-16 23:17:40
[Ddanzi Ilbo] The people posting troll-bait articles on the board lately (… - 2018-04-16 23:17:18
[Today's Humor] A famous detective, these days - 2018-04-16 23:17:11
[SLRCLUB] Asking about Capture One again - 2018-04-16 23:13:02
[SLRCLUB] micro sd 64g D700 test results - 2018-04-16 23:13:02
[SLRCLUB] Used-car dilemma: Mini Cooper jpg. - 2018-04-16 23:13:02
[82cook] Met an old friend again after 30 years, so glad to see her... - 2018-04-16 23:13:00
[Bobaedream] Dream car. (Vehicle sponsorship review. Long, image-heavy) - 2018-04-16 23:12:58
[Ppomppu] [Petition in progress] Investigate the Election Commission's dereliction of duty - 2018-04-16 23:12:45
[Ppomppu] Hong Jun-pyo's misuse of National Assembly operating funds, plus 4-5,000 a month… - 2018-04-16 23:12:45
[Ddanzi Ilbo] (Attention) Ddanzi readers! Audit petition over the Election Commission's dereliction… - 2018-04-16 23:12:21
[Today's Humor] We will not forget this day - 2018-04-16 23:12:14
[Today's Humor] A Blitzcrank master.gif - 2018-04-16 23:12:14
[Overseas IT Stories] Elon Musk's latest SpaceX idea involves party balloons and a bounce house - 2018-04-16 14:18:15
[Overseas IT Stories] DeferPanic secures a $1.5M seed round to popularize the unikernel concept - 2018-04-16 14:17:14
[Overseas IT Stories] Mega Drive Mini slated for release this year - 2018-04-16 13:52:08
[Overseas IT Stories] The US: a Department of Cybersecurity… - 2018-04-16 13:35:19
[IT 블로거들 이야기] Something wicked happened resolving ‘ftp.us.debian.org:http’ (-11 – System error) - 2018-04-16 13:23:53
0 notes