#Jidoteki
Meta OVA release v13 (1.13)
At the end of November 2019, we quietly released v1.13 of our On-Prem Meta appliance.
This release contains a few security fixes, minor bug fixes, and a major OS upgrade from TinyCore 7.x to 10.x.
Some notable changes below:
Improved token setup
One feature which we quickly implemented in the Jidoteki Admin API was the ability to configure the initial API token on first use. Unfortunately, this has some important security implications for those deploying an OVA in a hostile network (or on a public network).
To fix this, we've made our "first-run" feature optional (disabled by default). When enabled, it generates a random passphrase as the API token when the OVA is first launched. That passphrase can only be seen from the console (ex: by an infrastructure admin), or programmatically by an app running on the OVA appliance.
This initial passphrase must then be used to reset the API token and complete the "first-run" process.
Broken GitHub integration
Our GitHub integration had been broken since the day we launched it. It appeared to work on our test appliance only because we weren't testing it correctly. Oops. This is now fixed, and the Meta OVA can receive GitHub push webhook events and automatically pull the latest code changes.
New options in certain API calls
We're often adding new input/output parameters to API calls to provide more functionality for our customers without breaking existing functionality. A favourite of mine is the builddate, which is central to pretty much every API call. Certain API endpoints required you to parse the reply to determine the builddate, so for those endpoints we now return that value directly at the top level of the JSON response, to facilitate chaining API calls which depend on the builddate.
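As a sketch of what this enables (the field names below are illustrative, not the documented Meta API schema), chaining becomes a simple top-level lookup instead of a dig through the nested reply:

```python
import json

# Hypothetical reply from a builds endpoint; field names are illustrative,
# not the actual On-Prem Meta API schema.
response_body = json.dumps({
    "builddate": "1574899200",
    "data": {"status": "success", "logs": "..."},
})

reply = json.loads(response_body)

# The builddate is now available directly at the top level, so a chained
# API call can be constructed without parsing the nested data.
builddate = reply["builddate"]
next_endpoint = f"/builds/{builddate}/status"
print(next_endpoint)
```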
We've also added the ability to suppress the log output when polling for a build's status. The log output could easily reach a few hundred KB, and polling that much data every 5 seconds was not efficient. Now it can be disabled and the status output reduced to just a few KB instead.
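A polling loop along these lines is what benefits from the smaller responses. This is a generic sketch with a stubbed fetch function, not the actual Meta API client:

```python
import itertools
import time

def poll_build_status(fetch, interval=5, max_polls=120):
    """Poll a build's status until it leaves the 'building' state.

    `fetch` is any callable returning a small status dict; in a real
    deployment it would call the status endpoint with log output
    suppressed, so each poll transfers a few KB instead of the full log.
    """
    for _ in range(max_polls):
        status = fetch()
        if status["status"] != "building":
            return status
        time.sleep(interval)
    raise TimeoutError("build did not finish in time")

# Stubbed fetch: two in-progress replies, then success.
replies = itertools.chain(
    [{"status": "building"}, {"status": "building"}],
    itertools.repeat({"status": "success"}),
)
result = poll_build_status(lambda: next(replies), interval=0)
print(result["status"])  # -> success
```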
Finally, for our customers building multiple flavours of the same appliance (ex: small, medium, large), it is now possible to build only the small OVA, or any combination of flavours, through the ova_files parameter on the POST /builds endpoint. This is good for testing when you only need to test one OVA (since technically, the only difference between the various flavours is the disk size). For production builds, the ova_files parameter can simply be omitted.
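For illustration, here's how a request body might be assembled with and without the flavour selection. Apart from ova_files, the field names here are placeholders:

```python
import json

def build_payload(ova_files=None):
    """Assemble a POST /builds request body.

    `ova_files` is the flavour-selection parameter described above;
    the other field names are placeholders, not the documented schema.
    """
    payload = {"project": "example-appliance"}
    if ova_files is not None:
        # Only build the listed flavours (ex: just "small" for testing).
        payload["ova_files"] = ova_files
    # Omitting ova_files entirely builds every flavour (production default).
    return json.dumps(payload)

test_build = build_payload(ova_files=["small"])
prod_build = build_payload()
print(test_build)
print(prod_build)
```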
The future
This year we've begun working on Meta OVA v20 (v2.x), which is already a significant improvement.
Our focus has always been to help our customers deploy Enterprise on-prem virtual appliances, so with that in mind, our next OVA will contain many more enterprise features which can also be integrated into customer appliances.
We've also recently launched a new set of Open Source tools for working with the Meta OVA, so make sure to read our next blog post for details on that.
As usual, feel free to contact us and tell us your story. We'll be happy to help you get up and running with On-Prem Meta so you can start building and deploying your own on-prem appliances.
Meta OVA release v12
Today we're releasing our On-Prem Meta OVA release v12.
This release contains a few bug fixes, and a ton of awesome new features.
Update Oct. 1, 2018: We shipped a new minor update with a ton of performance optimizations.
Extensions management
A feature first added to our TODO list back in 2016: it is now possible to manage TinyCore .tcz extensions directly through the Meta UI or API. Not only can you view all the existing extensions on disk, but it's also possible to upload new .tcz extensions and start building with them. It is not yet possible to "automatically" download extensions from the official TinyCore repos, but that will be added eventually.
OVA signing
VirtualBox and VMware provide the ability to validate an imported OVA's "signature". This feature is not well known, but it's similar to signed Android or iOS apps. Essentially, it allows the person importing the OVA to confirm who built the OVA, and to ensure it wasn't tampered with in transit. When uploading RSA certificates to the Meta OVA, they will automatically be used to sign all future OVAs.
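For readers unfamiliar with the mechanism, OVA signing is built on an OVF-style manifest of per-file digests, which then gets signed. The sketch below is a generic illustration, not our actual tooling; it shows only the manifest half, while the real process signs the manifest with the uploaded RSA key (e.g. via openssl) to produce the .cert file that VirtualBox/VMware verify on import:

```python
import hashlib
import tempfile
from pathlib import Path

def write_manifest(ova_dir: Path) -> Path:
    """Write an OVF-style .mf manifest with a SHA256 digest per file."""
    lines = []
    for f in sorted(ova_dir.iterdir()):
        if f.suffix == ".mf":
            continue  # don't hash the manifest itself
        digest = hashlib.sha256(f.read_bytes()).hexdigest()
        lines.append(f"SHA256({f.name})= {digest}")
    mf = ova_dir / "appliance.mf"
    mf.write_text("\n".join(lines) + "\n")
    return mf

# Demo against a throwaway directory standing in for an unpacked OVA.
ova = Path(tempfile.mkdtemp())
(ova / "appliance.ovf").write_text("<Envelope/>")
(ova / "disk1.vmdk").write_bytes(b"\x00" * 16)
manifest = write_manifest(ova)
print(manifest.read_text())
```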
While working on this feature, we discovered an issue with the way our .vmdk disks were created, so we fixed that as well, which led to the following two additional features:
Auto-generating VMDK disks
With our new approach to creating .vmdk disks, it is now possible to create them on-the-fly for every build. This provides our customers with more flexibility in their OVA configurations, since we no longer need to create the disks in advance. We allow arbitrary sizes in 1GiB increments, and the value is specified in the Ansible builder config file.
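As a hypothetical example (the key names below are assumptions, not the documented builder config schema), the disk sizing might look like:

```yaml
# Illustrative Ansible builder config excerpt -- key names are assumptions
disks:
  disk1_size_gib: 8     # OS disk, now generated on-the-fly for every build
  disk2_size_gib: 40    # data disk, any size in 1 GiB increments
```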
Exporting to QCOW2, RAW, VHD
Since we were knee-deep in the OVA disk world, we decided to also provide the ability to export the OVA's OS disk to .qcow2, .raw, and .vhd formats.
This is a build-time option which results in a compressed disk1 tarball (.tar.gz) that can be downloaded and easily deployed to various hypervisors and/or cloud providers (such as Xen, AWS, GCP, Azure), which leads to the last new feature of this Meta release:
Deploying an AMI on Amazon EC2
We opted not to implement an automated AMI export feature for AWS, for various reasons. Instead, "Cloud documentation" is accessible from the support section, with instructions on how to deploy the RAW OVA disk image to Amazon's EC2 service.
We plan on adding instructions for other cloud providers in our next release.
Of course, this means our On-Prem Meta appliance is also officially available on Amazon Web Services.
Performance optimizations
We weren't satisfied with the v1.12.0 release, so we worked hard to ship v1.12.1 as quickly as possible, with a ton of performance improvements.
To list some of the changes we've made:
trimmed hundreds of DB queries down to one
replaced gzip with pigz for multi-core parallel compression
disabled compression for update packages, at the expense of slightly larger files
lazy-loaded images, fetching them only when entering a specific help docs section
changed the way we create disk1 (NBD vs Loopback)
moved certain files from memory to disk to reduce memory usage during a build
With these changes, our own Meta appliance builds are now 2x faster (down from 4 to 2 minutes).
One more thing
We've moved away from updating a wiki with our release information, and built dedicated Release pages and an RSS feed to centralize everything. We'll be rolling that out to all our customers shortly.
As usual, feel free to contact us and tell us your story.
Meta OVA release v11
We recently released v11 of our On-Prem Meta OVA. This release contains a handful of security updates, some great new features, and a huge memory optimization.
Memory
Previously, one build would require nearly 3x the size of the build in RAM (i.e.: a 350MB build would require 1GB). We've made some changes which cut that requirement by 33% to 50% for each new build. This means it's now possible to run 2x more concurrent builds, or provision a 2x smaller OVA.
Features
The biggest new feature is the official integration of GitHub webhooks. The Meta OVA can now directly accept a GitHub push event, and can automatically clone/pull the repository changes prior to making a build. This removes the need to manually push Git changes to the appliance, and opens the door for automatically launching builds when a webhook is received.
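GitHub signs each webhook delivery with an HMAC of the raw request body, which a receiver should verify before pulling any code. Below is a minimal, generic verification sketch of GitHub's X-Hub-Signature-256 scheme, not the Meta OVA's actual handler:

```python
import hashlib
import hmac

def verify_github_signature(secret: bytes, body: bytes, signature_header: str) -> bool:
    """Check a GitHub webhook's X-Hub-Signature-256 header.

    GitHub computes an HMAC-SHA256 of the raw request body using the
    shared webhook secret and sends it as "sha256=<hexdigest>".
    """
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

secret = b"webhook-secret"
body = b'{"ref": "refs/heads/master"}'
header = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
print(verify_github_signature(secret, body, header))         # True
print(verify_github_signature(secret, b"tampered", header))  # False
```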
We've also added a few new API endpoints and parameters, with no breaking changes.
Security
The usual suspects, such as Nginx, OpenSSL, and Qemu, have received security updates.
Of course, we've incorporated many bug fixes implemented over the last few months, particularly in regards to race conditions with certain asynchronous tasks.
We're already preparing for our v12 release, which will include a few long-awaited and often requested features. Stay tuned for more updates.
Announcing On-Prem, a new name for Jidoteki
Five years ago, we created the name "Jidoteki" (a Japanese word) to refer to the process of automatically creating virtual appliances. During that time, our business and approach has evolved, but the premise remains the same.
We decided to rename our product and service to On-Prem, to reflect what we truly care about.
To some, it may appear that On-Prem is just a generic, conversationally awkward name. While we agree to some extent, our intention is to cement the fact that we strive to provide the absolute best on-premises experience with the creation of our enterprise appliances.
In the years since we started, we've helped our customers build and ship thousands of appliances, which are running stably in the offline, secure environments of many Fortune 500 companies. We want to thank them for working with us all this time, and we plan to continue moving forward in this direction.
We've updated a few links, so please update your bookmarks:
Website: https://on-premises.com
GitHub: https://github.com/on-prem
Twitter: https://twitter.com/on_premises
IRC #On-Prem: ircs://chat.freenode.net:6697/on-prem
Note: We're not on Facebook, LinkedIn, Instagram, AngelList, or ProductHunt.
New Trial OVA
Today, we've also launched a new trial OVA for you to evaluate:
https://try.on-premises.com
It's a modified version of a typical enterprise virtual appliance created for our customers. It contains a sample stack: MySQL, Nginx, and NodeJS, as well as our open source REST API and Admin Dashboard.
The trial OVA is designed to help you see exactly what we mean when we say it's tiny, bullet-proof, and rock solid.
We're looking forward to continue helping businesses ship enterprise virtual appliances, and we'll be happy to help and answer any questions.
As usual, feel free to contact us and tell us your story.
Six months of unreleased features
Today, we finally released an update to Jidoteki Meta - v1.9.0, which contains a handful of security updates, and only one new major feature: backups.
The security updates are the typical ones you would find in any Linux system: OpenSSL, Nginx, Kernel, etc.
The backups feature makes it possible to quickly retrieve the most important Jidoteki Meta data (logs and database), and restore them on another appliance to get up and running quickly.
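The idea can be sketched in a few lines: bundle the logs and database into a tarball that can be restored on another appliance. The file names below are illustrative, not the actual Jidometa layout:

```python
import tarfile
import tempfile
from pathlib import Path

def backup(paths, out_path):
    """Bundle the most important appliance state into a .tar.gz archive
    that can be downloaded and restored on a fresh appliance."""
    with tarfile.open(out_path, "w:gz") as tar:
        for p in paths:
            tar.add(p, arcname=p.name)
    return out_path

# Demo with stand-in files for the database and a build log.
tmp = Path(tempfile.mkdtemp())
(tmp / "jidometa.db").write_text("sqlite data")
(tmp / "build.log").write_text("log line\n")
archive = backup([tmp / "jidometa.db", tmp / "build.log"], tmp / "backup.tar.gz")
with tarfile.open(archive) as tar:
    print(sorted(tar.getnames()))  # ['build.log', 'jidometa.db']
```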
Of course, this release does contain a multitude of useful bug fixes and minor improvements, but nothing major like our previous releases.
During the last six months, we had our heads down pumping out feature after feature, adding new whiz-bang functionality, only to drop each one a couple of weeks after completion. Why? Focus.
One of our biggest concerns is adding features nobody will use. Since we're also daily users of Jidometa, we quickly realized that our "new features" were not actually very useful. They definitely had a cool factor, but if we aren't using those features daily, then they're not very important.
Who got the boot?
Some of the features we built include: lsyncd+rsync+ssh replication, auto-update from a remote server, user and group management, S3 object data storage, kexec fast reboots, and server-side HTML page rendering.
Yes, we've essentially been busy spinning our wheels, but the outcome is that our Jidometa OVA remains a 60MB download, and does its job extremely well, with no overhead or confusion.
What's next?
We do plan to rehash some of those unreleased features into better, leaner implementations, but we'll wait for the right time (i.e.: when we see a real need for it).
At the moment, we want to focus on a Jidometa appliance which makes our customers' (and our own) lives simpler, while helping us work, develop, and build faster. We've listened to feedback and already have a list of important features to implement, but I promise we won't waste our time, this time around.
There are a few main things to look forward to:
The first is the ability to manage a cluster of Jidometa appliances from one central location. This would allow builds to be distributed across developer workstations, while allowing everyone to share their builds with the rest of the team.
The second is the ability to manage the .tcz extensions uploaded to the appliance, so updating something like nodejs can be done without our intervention.
Finally, we're looking at integrating SyncThing into every OVA (with the dev option enabled), to allow developers to quickly test and iterate on their app directly in an existing appliance, without going through the full build/update/reboot/test loop (note: we've been doing this for over a year inside our dev Jidometa appliance, and it's been a huge time saver).
Stay tuned for future updates.
Inside Jidometa: No more caching
In [part 2 of Inside Jidometa], we discussed our implementation of caching for JSON responses.
The initial goal was to solve the “slow page load” issues, but after a few months of running this in production, we eventually found ourselves battling more and more odd race conditions.
In retrospect, caching was a bad idea
I knew this from the start, but we were quite confident in our ability to implement a proper cache invalidation scheme to remove any possible race conditions.
The reality is that as our app grew, new features were added, and those features introduced more places where race conditions would surface.
Finding the root cause
I tasked myself with finding the exact root cause of our slow page loads. This required disabling caching, then optimizing our SQL queries.
Our SQL backend was performing many simple SQL queries per page load (~30 queries per load). This was extremely inefficient and unnecessary.
I identified every page which had such issues, and managed to reduce and combine queries down to 2 or 3 queries per page, thus significantly decreasing page load time.
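The pattern being fixed is the classic N+1 problem: one query for the list, plus one extra query per row. A generic sqlite3 sketch (the schema is illustrative, not Jidometa's actual one) shows the consolidation:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE builds (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE artifacts (build_id INTEGER, filename TEXT);
    INSERT INTO builds VALUES (1, 'v1'), (2, 'v2');
    INSERT INTO artifacts VALUES
        (1, 'disk1.vmdk'), (1, 'appliance.ova'), (2, 'appliance.ova');
""")

# Slow pattern: one extra query per build row (N+1 queries overall).
slow = []
for build_id, name in con.execute("SELECT id, name FROM builds"):
    count = con.execute(
        "SELECT COUNT(*) FROM artifacts WHERE build_id = ?", (build_id,)
    ).fetchone()[0]
    slow.append((name, count))

# Consolidated pattern: a single query using a JOIN + GROUP BY.
fast = con.execute("""
    SELECT b.name, COUNT(a.build_id)
    FROM builds b LEFT JOIN artifacts a ON a.build_id = b.id
    GROUP BY b.id ORDER BY b.id
""").fetchall()

print(slow)  # [('v1', 2), ('v2', 1)]
print(fast)  # [('v1', 2), ('v2', 1)]
```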
Some numbers
Without caching, loading a page (or API call) with ~700 builds would previously take nearly 30 seconds.
After my optimizations, this was reduced to 300 milliseconds, or 0.3 seconds. Those changes were sufficient for us to completely remove our JSON caching implementation, and also fixed other random issues/race conditions as a side-effect.
The lesson here is: caching sucks. Don’t do it unless.. just don’t do it. It’s pretty much always a hack, and will eventually somehow someway end up biting you in the rear. Even if you’re a seasoned expert in race-condition handling and cache invalidation, other things which you didn’t even know exist will surface and cause headaches and random failures -- because of caching. The typical approach to solving caching issues is to add more caching (see Intel CPUs), or to perform more ugly hacks to work around the caching issues.
I’m a fan of using no caching, and then moving on with my happy life ;)
Moving forward
We released Jidoteki Meta v1.8 at the beginning of June, which included these changes and a few other extremely useful features.
We’re continuously working to improve Jidometa in order to provide the best, fastest, and most reliable way of consistently building virtual appliances designed to run on-premises.
As usual, feel free to contact us if you’re in the process of or thinking of providing your app to enterprise customers. We’ll be more than happy to help.
Bricked virtual appliances
We recently released v1.7.0 of Jidoteki Meta, and with it came a little unexpected surprise: it bricked a few customer appliances.
In this post, I’ll explain how not to brick a customer appliance, as well as a few techniques to recover when it happens.
Oops!
Our build process is entirely automated - duh, that’s why it’s called Jidoteki (automatically, in Japanese) - but our release process isn’t. There’s a series of roughly 40 steps on a long checklist prior to making a Jidometa release. Most tasks are automated and only take a few seconds to complete/validate, but others are manual and require a bit more time.
In our most recent release, we changed something which shouldn’t have been changed (the default /etc/fstab file), which prevented the /data disk from being mounted - but only on older deployments.
Problems compounded
To make matters worse, our fstab blunder had the cascading effect of not starting the OpenSSH, Nginx, or Jidoteki API services, which made remote administration impossible (and thus made it impossible to update the appliance).
A missing validation
What we didn’t realize was that our release process didn’t include a step to validate our builds/updates against, you guessed it, older deployments! We did test the updates against slightly older appliances, but not against the oldest ones - the ones which some of our great customers were still running.
No worries though, we’ve added that to our process/checklist and can guarantee it won’t happen again.
Recovery
Luckily (and stupidly), our initial appliances shipped with a default root password (known only to us). We were able to provide instructions to login via the Linux console, start the necessary services, and then access the UI to upload an update package containing the fstab fix.
In our latest appliances, which are slightly more secure, there is no default root password anymore. In fact, logins are completely disabled, even by SSH. We provide the ability to change the admin password via the console GUI, but that only provides access to the files in /data (customer files), not root.
Recovery without password
We’ve customized the boot menu to provide just enough time to modify the boot command. Simply removing the ,/boot/rootfs entry will load the default TinyCore installation, which includes a root user with no password! omg! That’s a good thing. It means it’s still possible to fix a bricked appliance.
I know, it seems quite insecure at first glance, but the reality of a Linux virtual appliance is that anyone with access to the host machine can get root. There's no way to prevent that.
Other techniques include mounting the disk(s) in another appliance, or booting from a recovery ISO/CD.
In the end, there are so many ways to obtain root access and to recover from such issues that there's no real point in preventing your customers from having it. Security through obscurity is a wasted effort.
Moving forward
To avoid bricking customer appliances, we’ve decoupled essential services from the boot process. They will start no matter what, and always provide remote administration capabilities.
Secondly, the design of Jidoteki appliances makes it so easy to obtain root access (from the console, obviously not over the network) that we don't need to worry about the consequences of a "bricked" appliance. The customer data is never touched by updates, and customers are free to obtain access and perform a manual recovery procedure.
We’re working on an automated process which actually validates (integration test?) an appliance once it’s built, against a set of criteria (ex: does X service start, are disks mounted correctly, etc). We’ve already written the test suite and have been using it for a while on newly built appliances (not updated ones). Our last step is to integrate it to Jidometa and automatically run it against the builds/updates when they complete.
Finally, wouldn’t it be nice for an appliance to self-heal? Yes, it would, and we’re working on just that. The idea isn’t a new one (I implemented something similar in 2009 while working on a custom Linux OS), but essentially rather than overwrite the existing OS during an update, we could rename the file and have a second “recovery” boot option which boots from a working version when the primary one fails.
Contact us
As usual, if you’re planning on providing your On-Prem or Installer-based application to enterprise customers, contact us so we can discuss the details of your setup.
Saving settings on an immutable OS
As we’ve mentioned on multiple occasions, our Jidoteki virtual appliances are immutable and run entirely in memory. Customer data is stored on a secondary disk (disk2), while the OS and application is stored on the primary disk (disk1), and loaded in memory on boot.
System settings
The system settings are typically independent of the customer data. They define basic things such as network settings, admin passwords, SSH keys, and the secondary storage type (local disk, NFS, etc). For that reason, those settings can't be stored on disk2. This leaves us with no choice but to save them on disk1.
Two existing approaches
There are two main approaches to this, as seen in various operating systems:
TinyCore Linux: The list of settings (files) are backed up to a .tgz with a specific command, and then automatically restored on boot.
Alpine Linux: All modified files are kept in memory and are committed to disk with a specific command, and then automatically restored on boot.
An even better approach
You probably saw this coming, but of course, we've improved on both of those processes and released the result as an open source tool: symlinktool.sh
I’ll begin by explaining why, and then how.
Why a new tool?
The main problem with the two existing approaches from TinyCore and Alpine is clear: a specific manual user action is required to persist the changes. If the end user forgets to type the command, the changes are lost on reboot.
The other problem with the TinyCore approach is that the .tgz provides no insight into the history of changes. The file constantly gets overwritten, and you have no way of rolling back to a previous known-good configuration. To their credit, the command-line tool does have a "-s" flag which essentially allows you to keep ONE (1) extra copy of the settings, but you can't roll back to 2, 3, or even 10 configurations ago.
The other problem with the Alpine approach is that any file can be modified and then restored on boot. In an immutable OS provided to customers, you definitely want them to be able to edit some files, not all. The difference is subtle, but it's essentially the difference between guaranteeing a functioning system, and not being able to guarantee that they won't accidentally overwrite /etc/fstab and brick the appliance.
How does symlinktool.sh work?
Now that we’ve cleared up the why, I’ll explain what our tool does differently.
For starters, symlinktool.sh uses the exact same /opt/.filetool.lst as TinyCore Linux - which means it's a drop-in replacement for the TinyCore filetool.sh. That .filetool.lst specifies the list of files and directories which should be persisted to disk. There's also an /opt/.xfiletool.lst file which lists files to be excluded. This allows us to control exactly which settings files can/should be persisted, and those .filetool.lst/.xfiletool.lst files can't be modified by the end user.
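For readers unfamiliar with TinyCore's .filetool.lst format, it's simply a list of paths relative to /, one per line. The entries below are illustrative, not our actual list:

```
etc/shadow
etc/hostname
opt/network-settings.conf
```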
Secondly, the settings files are stored and versioned in a Git repository on disk1. Every time one of the system tools, or the Jidoteki Admin API, makes changes to a settings file, it runs symlinktool.sh, which commits the changes to the Git repo. Of course, if a file is edited manually, the changes will not be committed, but you can always enter the directory and type git diff to see the changes since the last commit. The end user can also manually commit changes to the Git repo.
Finally, and here’s the most magical property, once symlinktool.sh is used, as its name suggests, it generates symlinks in the original location of the file, and points them to their persistent location on disk1.
Example: /etc/shadow -> /mnt/sda1/ova-backups/etc/shadow
This means that ANY change to /etc/shadow will be automatically reflected on the persistent disk. Performing a Git commit is not necessary, as the changes will remain even after reboot.
We slightly modified the TinyCore Linux tc-restore.sh script to ensure the symlinks get restored on boot. Another benefit of our approach is that it's much faster than extracting a .tgz to memory.
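The core trick can be sketched in a few lines: move the file to persistent storage once, then leave a symlink behind so every later write lands on disk1 automatically. This is a minimal sketch of the idea only; the real symlinktool.sh is a shell script and also commits each change to Git:

```python
import tempfile
from pathlib import Path

def persist(live_file: Path, persist_dir: Path) -> Path:
    """Move a settings file to persistent storage and symlink it back."""
    persist_dir.mkdir(parents=True, exist_ok=True)
    target = persist_dir / live_file.name
    if not live_file.is_symlink():
        live_file.replace(target)      # move the real file to "disk1"
        live_file.symlink_to(target)   # leave a symlink in its place
    return target

# Demo in a temp dir standing in for /etc and /mnt/sda1/ova-backups.
tmp = Path(tempfile.mkdtemp())
shadow = tmp / "shadow"
shadow.write_text("root:*:1::::::\n")
persist(shadow, tmp / "ova-backups")

shadow.write_text("root:hash:2::::::\n")  # writes through the symlink
print(shadow.is_symlink())
print((tmp / "ova-backups" / "shadow").read_text())
```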
What about a non-immutable OS?
A non-immutable OS, for example Debian installed on disk, is quite different. On that system, any and every file can be modified and will automatically be persisted to disk. We consider this bad, as it makes it almost impossible to guarantee working and atomic system updates the way Jidoteki does.
Our new tool, which we hope to deploy in our appliances very soon, is a perfect hybrid solution between immutable and non-immutable OS’s, and provides some nice features such as being able to see the exact change history of every “settings file”, and even manually restore old settings with just one command.
Feel free to check out the source on GitHub. It’s MIT licensed, so everyone is free to use and modify as they wish. Please submit a pull request if you can suggest some improvements.
If you’re currently deploying a SaaS or Installer-based software (ex: Java app) and want to provide an absolutely rock solid Enterprise virtual appliance to your customers. Please contact us so we can discuss your requirements.
Our pricing is clear, and we can get you up and running in anywhere from a few hours to ~1 week, depending on your setup.
Jidoteki: Updated pricing, easier to get started
We’ve recently updated our pricing to make it even easier to get started with Jidoteki.
Having celebrated 5 years in business a few days ago, we decided to make our service more accessible to companies by introducing a new pricing structure.
Previous Setup plan
Previously, we offered only one “Setup” plan to get started with Jidoteki. It included lots of custom work, consulting, and onboarding in order to get a custom appliance up and running. It’s great for companies with lots of custom requirements for their software to work on-premises.
One recent example involved us converting a Java-based installer application to an on-prem virtual appliance with custom tools and scripts.
Splitting the Setup plans
We’ve renamed the original Setup plan to “Pro Setup” plan, which includes a fully customized virtual appliance covering all your needs for the perfect on-prem appliance. It includes customizations at every level to ensure you can easily transition from Installer/SaaS -> to On-Premises.
Today, we’re also announcing the “Business Setup” plan, which includes a slightly more generic virtual appliance. We still provide the same open source tools, scripts, and features of our rock-solid enterprise virtual appliance, along with its great management features. The only difference is we decreased the amount of custom work to finalize the OVA. Of course, we’ll still customize a few things to ensure your app actually works as intended.
Benefits of the Business Setup
The main benefit of the new setup plan is that we can have the appliance ready in just a few hours. Typically, with lots of customization, time to completion ranges from one week to a month or more.
... ready in just a few hours
To be eligible for our Business Setup plan, we require your stack to conform to one of the typical software stacks, such as LAMP, LEMP, MEAN, etc. With those types of stacks, we can get you up and running very quickly, and you can start providing your on-prem OVA to enterprise customers with very little delay.
Another great advantage is that it’s always possible to upgrade to the “Pro Setup” in the future, if the need arises.
Moreover, Jidoteki Meta will work just as well with either Setup plan, so you’re still free to maintain full control of your builds, on your own premises.
Get your feet wet
I want to extend an invitation to companies to try our service. We can pretty much guarantee you’ll love the end product and will want to continue working with us in the long-term. It’s a win-win situation, as you’ll also gain the ability to serve enterprise customers (which translates to $$$).
Contact us and we’ll be more than happy to discuss your situation and requirements.
Inside Jidometa: Loose coupling of customer data
[Read part 3 of Inside Jidometa]
One of the major differences between Jidoteki appliances, and most other setups, is that we isolate customer data and ensure it’s not tightly coupled to the system.
Looking at the graph above, we can see the customer data is stored on disk 2, while the OS/apps/kernel are stored on disk 1.
Issues with tight coupling
Adding disks to a virtual appliance is quite trivial. Moving them to another appliance is just as simple. We know this, so we made a plan to support it.
In a typical non-Jidoteki appliance, databases, log files, application settings and data are all stored on the same disk. This tight coupling introduces problems which are almost impossible to work around once deployed:
Failed OS updates could accidentally wipe all customer data
It’s impossible to expand the disk once it’s full
It’s difficult to move customer data to a newly deployed appliance
Customers may not have full control of their data
The disk must be sufficiently large from the start
Our disk setup
Jidoteki appliances use a minimum of two disks. The first disk is quite small (an 8GB sparse disk) and used exclusively for storing the system and custom settings (ex: network settings). The small OS disk means importing an OVA for the first time is very quick. The second disk is sized based on our customers' requirements, but typically ranges between 10GB and 2TB (sparse disk), depending on the application. In any case, the customer can opt not to import the second disk, or simply delete it and re-create it with whatever size they want.
The best part about disk 2 is we deploy it using LVM, which allows anyone to attach a third, fourth.. eighth disk (max 8 disks) of virtually any size. If a deployed appliance uses 2TB and needs to increase to 42TB, an additional disk can be added and the appliance will automatically grow the LVM disk.
Other benefits to loose coupling
By dedicating the entire disk to the customer data, we can provide full permissions to the appliance admin (the customer) to login via SSH and modify any file(s) on that disk. The customer controls their data and remains free to alter it how they want (at their own risk, of course). We do this in our own on-premises virtual appliance, Jidoteki Meta (Jidometa), and guide our customers in doing it this way as well.
Another benefit is having the ability to detach the disk from the appliance, deploy a new appliance on a different (physical?) server, and then re-attach the disk. It’s not as fancy as live migration, but moving disks is effortless compared to transferring data with rsync or some other wild migration scheme.
Of course, if it’s that easy to move data, that means it’s just as easy to backup the data, or create a fault-tolerant setup for disaster recovery scenarios.
Finally, as mentioned in a previous blog post, dedicating the second disk to customer data means it can also be replaced by network storage, such as NFS, AoE, iSCSI, or NBD. I can’t imagine the logistics (and coding nightmare) of having to support network storage when the application was designed for storing on the same disk. With Jidoteki, we simply mount the network storage to /data, instead of mounting the local disk 2 to /data. No software changes required. Easy.
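As an illustration (device names and server paths below are made up), swapping local storage for network storage is just a different entry for the same mount point:

```
# Local data disk mounted as the data volume:
/dev/sdb1                      /data   ext4   defaults          0 0

# The same mount point backed by NFS instead -- no software changes:
nfs-server:/export/appliance   /data   nfs    defaults,_netdev  0 0
```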
Moving forward
As always, we’re happy to discuss your requirements if you plan on providing an on-prem appliance to enterprise customers. We’re familiar with enterprise requirements, and try our best to provide the most rock-solid on-premises virtual appliances out there. In a few days we’ll be celebrating 5 years of doing this stuff. It really is our specialty, and we’d love to help more companies go on-prem, the right way.
Feel free to contact us, or sign up for our new monthly newsletter to stay up-to-date with what we're doing.
Inside Jidometa: A look at our Open Source Software
[Read part 2 of Inside Jidometa]
In this post, we’ll highlight some of the open source tools we’ve created and deploy in every Jidoteki appliance, including Jidometa itself.
The OS
We'll start from the bottom up. Jidometa is built on top of TinyCore Linux — a small-footprint, in-memory operating system built on GNU/Linux. We use the stock OS with only a few minor changes.
The sources for the OS toolchain
The sources for the OS busybox
The TinyCore OS initramfs and kernel
The TinyCore OS scripts modifications
The kernel
TinyCore Linux ships with a slightly modified Linux kernel. In our tests, we were able to deploy our appliances with a completely unmodified (vanilla) Linux kernel, so we provide the full unmodified kernel sources.
The kernel build scripts
The sources for the Linux kernel
The extensions
Extensions are similar to Debian .deb packages and RedHat .rpm packages. In Jidometa, they are squashfs .tcz files which contain pre-compiled and stripped binaries. They're typically much smaller because we don't include man pages, headers, and other files not needed in an immutable OS.
The sources for the extensions
We have yet to publish all the build scripts for our extensions, but they all use basic commands:
./configure; make; make install
Anyone can easily rebuild them with the original sources.
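As a rough illustration of what one of those builds looks like end-to-end — the package name, paths, and flags below are examples, not our actual build scripts, and the steps are printed rather than executed:

```shell
#!/bin/sh
# Illustrative sketch of producing a TinyCore .tcz extension.
# Names, paths, and flags are examples; the steps are printed, not run.
PKG=example-1.0
DESTDIR=/tmp/$PKG-root

tcz_build_steps() {
    # 1. Standard autotools build, staged into DESTDIR
    echo "./configure --prefix=/usr/local"
    echo "make"
    echo "make DESTDIR=$DESTDIR install"
    # 2. Strip binaries, drop man pages and headers to keep it small
    echo "find $DESTDIR -type f -perm -u+x -exec strip --strip-unneeded {} +"
    echo "rm -rf $DESTDIR/usr/local/share/man $DESTDIR/usr/local/include"
    # 3. Squash the staging tree into a mountable extension
    echo "mksquashfs $DESTDIR $PKG.tcz"
}

tcz_build_steps
```

The mksquashfs step is what makes the extension a read-only, compressed, mountable unit rather than a pile of loose files.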
The admin scripts
We always include our own Open Source scripts to help manage the appliance. The scripts vary in importance and are either written in POSIX shell, or PicoLisp. We're slowly working on replacing all our POSIX shell scripts with PicoLisp shell scripts.
The Jidoteki Admin scripts, which manage the appliance from the console
The Jidoteki Admin API scripts, which manage the appliance from a REST API
The helper scripts, which provide additional functionality to the appliance scripts
We have quite a few more administration scripts, but we haven't open sourced them yet.
The libraries
The Admin scripts are built on top of a set of open source PicoLisp libraries which provide the foundation for stable, tested, and functional tools. The libraries include:
A unit testing library, to help write simple unit tests for every PicoLisp script and library, and ensure correct functionality (as well as reduce bugs/regressions)
A JSON library, to natively parse and generate JSON documents directly in PicoLisp
A SemVer library, to help manage, compare, and validate appliance and update package versions
An API library, to help build simple REST APIs as quickly and easily as possible
The licenses
It's almost impossible to gather every single license for every piece of software used in the appliance. Instead, we provide the software source packages intact, which include all the unmodified license files as well.
Additional licenses can be found directly in the appliance in:
/usr/share/doc/License/
The ISOs
Since downloading all individual source packages can be quite troublesome at times, we also provide direct links to download ISOs which bundle all the sources together.
Fauxpen source
We're not running a fauxpen source operation here. Admittedly not all our code is Open Source, but we do our best to comply with the GNU and OSD licenses by making it easy for everyone to access the sources files for the appliances and binaries we distribute.
We're constantly releasing new open source tools and libraries, so make sure you sign up for our mailing list to stay up to date on our work, tools, and solutions.
If you're looking to provide your customers with an on-premises virtual appliance, without being locked-in by your vendor, then contact us and we'll be happy to discuss your requirements.
Automatically updating a cluster of offline appliances
In this post, we’ll describe a new feature in Jidoteki Meta, which makes it possible to generate special update packages aimed at updating a cluster of appliances on an internal network.
A solid foundation
This feature builds on top of a recent feature, Full update packages, which can be applied to ANY appliance regardless of their version.
Some of our customers deploy multiple types of appliances to their enterprise customers. In those situations, where you have a cluster of node (slave) appliances and one or more server (master) appliances on your internal network, it can be quite bandwidth intensive to perform online updates. If the node appliances don’t have internet access, manual updates are required, which is troublesome when each appliance is on a different version.
Distributing updates
To solve this problem, we can now create a special update package for the server appliance which includes the Full update package of the node appliances.
This makes it possible for the server appliance to hold and distribute updates to each node appliance, acting as an update server, allowing the node appliances to automatically update themselves to the latest version without requiring internet access or manual intervention.
Creating the updates
In Jidoteki Meta, the option is labeled as the “Node source”, which is the build ID of any previously built appliance. When building a server update package, and a node_source is selected, that build’s Full update package is added to the build’s OVA in /mnt/sda1/boot/ , and it’s added to the build’s update packages.
Complexity reduction
From the outside, it seems like a complex feature, but in fact it’s very simple and extremely useful in situations where an admin has more than a handful of appliances to update.
With Jidoteki, we build our solutions to solve problems which not only our customers have, but also problems their enterprise customers have.
If you’re currently distributing your software on-premises, or planning on doing that sometime soon, please feel free to contact us, as we’re focused exclusively on making the absolute best virtual appliances and automated offline update process. We’ll be more than happy to discuss your situation and see how we can help.
New Jidoteki network storage options
One of the overlooked issues with deploying a virtual appliance is network storage. Enterprises typically have their own high-end storage platforms, and are better served when an appliance can utilize them.
In this post, we’ll look at various storage options we’ve made available for our Jidoteki virtual appliances.
In the past...
The initial appliances built with Jidoteki only offered one storage option: local disk. All application data would be written to disk2 - a local disk with a fixed size. This was fine for most end-users, but it has some limitations which are difficult to work around.
We’ve included NFS utilities for quite some time, but manual SSH access was required to configure it, and it could not be used as the primary storage for the application.
Moving forward
In the last few weeks, we’ve added support for automatically connecting the appliance’s storage to an external NFS share, configured directly through our open source Jidoteki Admin Dashboard UI (included in every appliance).
At the same time, we realized NFS was not representative of the various network storage platforms out there, so we decided to also add support for AoE (ATA-over-Ethernet), iSCSI, and NBD.
Each storage platform requires only a couple of configuration parameters and is very easy to set up through the web UI. An end-user in an enterprise environment can integrate the appliance into existing backup and disaster recovery schemes without worrying about pre-provisioning large disk images for local storage.
This feature is already available and shipping for all our customers. If you’re interested in deploying real on-premises virtual appliances to enterprise customers, feel free to contact us so we can discuss your requirements.
Battle: Bitnami VMs vs Jidoteki VMs
[author: Alex]
I discovered Bitnami a while back, but never took the time to try out their virtual appliances.
I recently stumbled on them again and decided to open up the Bitnami ownCloud virtual appliance, to compare it with a Jidoteki ownCloud virtual appliance I created just for testing. Here’s our post about an RPi ownCloud we built a while back. It’s almost identical to the x86_64 version.
Comparing the VMs
1. The first thing I noticed was the file sizes:
Bitnami: 448MB compressed, 1.5GB uncompressed (on disk)
Jidoteki: 90MB compressed, 141MB uncompressed (on disk)
That’s a 10x difference in disk usage for a VM that does the exact same thing.
2. Something interesting about the disks:
Bitnami: 1 x 17GB disk (sda1) w/ext4
Jidoteki: 1 x 8GB disk (sda1) w/ext2, 1 x 250GB disk (sdb1) w/ext4 + LVM
I thought this Bitnami disk setup was weird, since ownCloud can easily require a lot of disk space for storing its data - why doesn’t the VM ship with a 2nd disk by default? Jidoteki includes a 2nd disk setup with LVM, and automatically expands on boot when the user attaches a 3rd, 4th, 5th, or even 6th disk. Of course that is entirely configurable.
3. Network adapters
Bitnami: PCNet32 (AMD PCnet-PCI II) in Bridged mode
Jidoteki: E1000 (Intel Pro/1000 MT Desktop) in NAT mode
I guess this doesn’t matter so much, since the user can change it themselves, but the pcnet32 adapter is a 10 Mbit adapter, compared with the e1000 at 1 Gbit. Bridged mode by default makes the VM accessible on the local network before it has a chance to be configured. I think that’s a security risk, albeit a rather small one, taken at the expense of convenience - which is something I usually frown upon. We set ours to NAT by default, which limits access to the user’s local computer - default “secured” (I know, I know, NAT is not security..).
4. VM import and boot times were also quite interesting:
Bitnami: 5 minutes to import, 5 minutes to boot
Jidoteki: 30 seconds to import, 30 seconds to boot
Import and boot were done with VirtualBox on my MacBook Air. Both will likely be much faster using VMware, or a faster computer. I’m not sure what the Bitnami appliance was doing behind the scenes during the boot process, since all output is suppressed early in the boot process.
Verdict: Jidoteki VMs are better
Ideas for Bitnami:
1) There is almost no output from the moment it boots until the login screen with the Bitnami banner. This made me question whether the boot was working or not.
2) The WRITE SAME error message is common for virtual machine disks. It’s not harmful, but scary. We suppress it with this code early in the boot process:
# Suppress WRITE SAME messages
disks=`find /sys | grep max_write_same_blocks`
for i in $disks; do
    [ -f "$i" ] && echo 0 > "$i"
done
3) Replace GRUB with Syslinux for faster boot.
4) Setup LVM by default, so the disk can be expanded in the future. LVM root is sometimes challenging, and quite difficult to recover, so perhaps follow our approach instead ;)
Comparing the VM contents
To compare the contents, I mounted the VMDK disk on a Linux system with the help of qemu-nbd.
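For anyone wanting to reproduce this, the steps were roughly as follows. The commands are printed here rather than executed, since they need root and the nbd kernel module; the mount point and nbd device are typical choices, not requirements:

```shell
#!/bin/sh
# Rough reproduction of the VMDK inspection steps; commands are printed
# so the sketch can be read without root privileges.
inspect_vmdk() {
    echo "modprobe nbd max_part=8"              # load the NBD driver
    echo "qemu-nbd --connect=/dev/nbd0 '$1'"    # expose the VMDK as a block device
    echo "mount -o ro /dev/nbd0p1 /mnt/vm"      # mount the first partition read-only
    echo "# ...browse /mnt/vm, count files, check sizes..."
    echo "umount /mnt/vm"
    echo "qemu-nbd --disconnect /dev/nbd0"
}

inspect_vmdk bitnami-owncloud.vmdk
```

Mounting read-only is worth the habit: it guarantees the comparison itself doesn’t alter either appliance’s disk.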
1. Too many files
Bitnami: 67123 files and directories Jidoteki: 11 files and directories
No wonder the Bitnami OVA is so big, it has way too much stuff! ;)
Haha ok, so we cheated here. In fact the rootfs on the Jidoteki OVA contains compressed squashfs packages (TCZ extensions) which hold all the files for specific applications. This follows the TinyCore Linux architecture, which allows us to have only a few files on disk representing the entire OS, libraries and applications.
2. Open Source licenses
Bitnami: 77 files found in /opt/bitnami/licenses/, 0 NOTICE file
Jidoteki: 8 files found in /usr/share/doc/License/, 1 NOTICE file
Bitnami does a remarkable job of including all licenses for every Open Source software used in the appliance. Unfortunately, there’s no indication on where to obtain the sources. For the record, that’s not good enough for GPL compliance, and it’s useless for people who want to modify the software or to see what patches were applied. The Jidoteki appliances include a NOTICE file in the License directory, with a URL to download all the individual sources, patches, and build scripts, as well as an ISO containing all the sources. Only the GPL requires that, but we do it for all the Open Source software we include.
3. Namespacing and separation
Bitnami: All changes namespaced to /opt/bitnami
Jidoteki: All changes namespaced to /opt/jidoteki, except some things
Jidoteki scatters OS customizations all over the place and it’s a bit of a mess, but there’s one important difference: All OS customizations are contained in the rootfs. The base OS (corepure64) is completely unmodified except for the kernel modules which were updated. This is quite different from Bitnami, which is a full OS installation. That means on the Bitnami appliance it’s impossible to determine what parts of the OS were customized, and what shipped by default with the OS.
4. OS File and directory sizes
Bitnami: ~800MB full OS, full documentation, full kernel modules and headers, unstripped binaries, perl and python libs (probably required for Ubuntu).
Jidoteki: ~12MB bare-minimum OS, no documentation, optimized VM kernel modules, no headers, stripped binaries, no perl or python libraries.
The OS used in Jidoteki appliances (TinyCore Linux for now...) is ridiculously small and lightweight. Its developers made a great effort to cut out the cruft and leave us with a fully functioning system. The Bitnami OS seems completely un-optimized and accounts for half the disk space used in the appliance. That becomes an issue when you want to run multiple appliances side-by-side.
5. App Files and directory sizes
Bitnami: ~636MB for Bitnami, the ownCloud app, and its dependencies
Jidoteki: ~125MB for Jidoteki, the ownCloud app, and its dependencies
Bitnami does something interesting here: they include a common directory of shared libraries, locales, and other files which use up 100MB, and which are already included with the OS. It seems they duplicated those files to maintain control of shared libraries independently of the base OS. They also include the full Apache and PHP applications, which also use nearly 100MB. Jidoteki includes a stripped down version of Nginx and PHP instead, which only require 8MB.
Verdict: Jidoteki VMs are better
Ideas for Bitnami:
1) Put more effort into trimming down the base OS. All those libraries and binaries are typically not needed, or are only needed for one specific system. Ideally, you could switch to a smaller base OS which already has everything stripped down - but that’s a lot of work. It’s typically easier to refactor than rewrite ;)
2) Consider open sourcing and distributing the actual sources of your appliances, and separating the Bitnami-specific files from the default Ubuntu files. I understand they are probably all just unmodified base files/packages from Ubuntu, but that doesn’t help me know what changed in the OS you provided.
Comparing security and updates
It seems Bitnami does a great job of keeping on top of security issues. Their changelogs appear to be automatically generated, but they are frequent and up to date, so that’s good.
1. Updates to the base OS
Bitnami: looks like a simple apt-get upgrade combined with a Bitnami-specific upgrade process
Jidoteki: Jidoteki-specific offline upgrade process
The Bitnami upgrade process seems quite simple, but a lot is left in the hands of the user. The appliance itself must be maintained by the user, requires internet for updates, and can disrupt ownCloud if something breaks. With Jidoteki, we handle the maintenance of the entire OS, along with kernel and CVE security updates. A user only needs to upload our update package to the appliance, and it will automatically update the files on the boot disk. The process is atomic and almost guaranteed to work. A reboot is required to boot the OVA with the latest version.
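The heart of an atomic file update is nothing exotic: stage the new version on the same filesystem, then swap it in with a single rename. Here’s a toy sketch — our real process additionally verifies signatures and replaces many files as a set:

```shell
#!/bin/sh
# Toy demonstration of rename-based atomic replacement. rename() within
# one filesystem is atomic: readers see the old file or the new one,
# never a half-written file.
set -e

atomic_replace() {
    tmp="$2.tmp.$$"
    cp "$1" "$tmp"    # stage the new version next to the target
    mv "$tmp" "$2"    # atomic swap
}

# Throwaway files standing in for boot-disk contents
demo=$(mktemp -d)
echo "rootfs v1" > "$demo/rootfs.gz"
echo "rootfs v2" > "$demo/new-rootfs.gz"

atomic_replace "$demo/new-rootfs.gz" "$demo/rootfs.gz"
cat "$demo/rootfs.gz"    # prints: rootfs v2
```

Because the staging copy lives next to the target, the final mv never crosses a filesystem boundary, which is what keeps the swap atomic.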
2. Updates to ownCloud
Bitnami: lots of manual steps and DB migration stuff (occ upgrade)
Jidoteki: automatic with auto occ upgrade on reboot
We automated the entire ownCloud app upgrade process, including the DB migrations. This was necessary because ownCloud does things a bit differently. Bitnami leaves it to you with a set of outdated instructions to follow. Fingers crossed if you're successful on the first try ;)
3. Internet updates
Bitnami: yes
Jidoteki: no
We aim for our appliances to run behind the firewall, and in fact they don’t even require internet at all. You can literally plug in a USB key with an update package, and then use that to perform an upgrade of a Jidoteki appliance. Of course we have a method of performing online internet updates as well, but that’s not the default. Bitnami defaults to online/internet updates because it’s more convenient. Again I think that’s not the best approach, but probably necessary considering their market.
4. “The user can fudge the appliance factor”
Bitnami: oh yes
Jidoteki: nope
With Bitnami, the user has full root access to the appliance, and can pretty much do anything - including break it. Sure it’s the user’s fault if they do, but it significantly increases their support workload from people who have no idea what they’re doing. Jidoteki targets enterprise installations. We make sure to run the OS in memory only, and to have a read-only system to prevent “accidental” changes. This is by design in TinyCore Linux, which is another reason we use that as the base OS.
Verdict: Jidoteki VMs are better
Ideas for Bitnami:
1) Consider a read-only OS to prevent users from making unexpected changes. Some parts of the system can be read/write, but that should be reserved for configuration files and “data”, not libraries and binaries. If someone wants to install their own version of Apache, they should do it on their own Linux installation.
2) When updates are not atomic, bad things can happen. It seems like a repeat of the previous point, but if the user makes a small system change, and applies an update that fails, they could lose all their data - which is a terrible situation.
Comparing system management
This is the last point, thanks for sticking through to the end.
1. Network settings management through the console
Bitnami: login as bitnami, you’re on your own
Jidoteki: console gui to configure basic network settings
When people first boot an appliance, the first thing they need to do is configure the network settings - perhaps change the IP or specify different DNS servers. Jidoteki appliances always include a simple console boot GUI to change those settings, which doesn’t require a login or typing obscure commands.
2. App and other settings management
Bitnami: login as bitnami, use ctlscript.sh or other obscure commands
Jidoteki: web Dashboard and REST API for system management, or obscure commands haha
Bitnami appliances don’t include a Management Dashboard, which is unfortunate. They do provide phpMyAdmin and SSH, but everything else seems to be command-line based. Jidoteki includes a user-friendly web Dashboard and a dev-friendly REST API for system management. It’s a bit limited in what it can do, but the basics are there (seeing service status, appliance usage graphs, etc). It’s sufficient for a first-run and for basic maintenance.
Verdict: Well you’ve probably figured it out by now
Conclusion
I didn’t mean this post to knock on Bitnami. Their company seems like it has a really great culture, and I love that they are supporting Open Source and simplifying access to Open Source Software. Their contributions have been valuable so far, but they need to improve in certain areas regarding their virtual machines.
Jidoteki is focused on creating rock-solid virtual appliances destined for the enterprise. We focus exclusively on doing things the correct way, with security and privacy as a top priority. Our solution is available for businesses who want to distribute their software on-premises using the best and simplest approach, along with the ability to automate appliance builds using continuous integration systems such as Jenkins.
Feel free to contact us if you’re looking to build and distribute an on-prem virtual appliance. We’ll be more than happy to discuss your requirements.
New flexible options for updating a virtual appliance
Don’t worry, we haven’t changed our update process since the last post (part 4), since we’re certain it’s the absolute best approach. On the other hand, we’ve added two new features to provide even more flexibility.
Diff update packages
Our previous update packages only had the ability to apply a binary diff to existing files. The advantage of a binary diff is the update package’s file size can be dramatically reduced, since it only contains a delta between two versions.
The problem with binary diffs becomes apparent when an appliance is several versions behind the latest. Updating would require applying each binary diff update sequentially, in order. This led us to create bundle updates.
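Conceptually, a binary-diff update pipeline looks like the bsdiff/bspatch pair shown below — a common toolset for this kind of work, though I’m not claiming our packages use these exact tools. The commands are printed, not run:

```shell
#!/bin/sh
# Conceptual binary-diff workflow, printed rather than executed.
# bsdiff/bspatch stand in for whatever delta tool a packager uses.
diff_update_steps() {
    # Build time: produce a small delta between two rootfs versions
    echo "bsdiff rootfs-v1.gz rootfs-v2.gz update.bsdiff"
    # On the appliance: reconstruct v2 from v1 plus the delta
    echo "bspatch rootfs-v1.gz rootfs-v2.gz update.bsdiff"
}
diff_update_steps
```

The key property is that applying the patch requires the exact v1 file to be present — which is precisely why an appliance several versions behind needs every intermediate diff.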
Bundle update packages
A bundle update package wraps multiple binary diffs into one package. That bundle detects the version of the appliance, and then successively applies each update up to the latest version. This is great as it simplifies the task of applying multiple updates to an outdated system.
The problem with bundles is they can occasionally grow quite large, sometimes even surpassing the size of a full appliance. In that case, a different strategy is needed.
Full update packages
Our latest creation: a full update package contains the exact same files found on the boot disk of an appliance. The update process atomically overwrites each file, rather than applying a binary diff. This makes it possible to update ANY appliance with just one update package - regardless of what version it was on. And yes, it’s backwards compatible with every appliance built with atomic updates (~2015).
Of course, we use encryption and cryptographic signatures to ensure an update package can’t be applied to the wrong system (ex: company A’s update packages won’t work on company B’s appliance).
Since the update package doesn’t contain the actual VM disks, its filesize is guaranteed to always be smaller than the full appliance. It’s a great solution that fits perfectly between Diff and Bundle update packages.
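A minimal sign-then-encrypt flow can be sketched with plain openssl. To be clear, this is not our actual packaging — the key size, cipher, and passphrase handling are illustrative — but it shows why a package built for company A can’t be applied to company B’s appliance:

```shell
#!/bin/sh
# Illustrative sign-then-encrypt of an update package with openssl.
# Key names, cipher, and the passphrase are placeholders.
set -e
work=$(mktemp -d); cd "$work"

echo "pretend this is an update tarball" > update.tar

# Vendor side: sign the package, then encrypt with a per-customer secret
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out vendor.key 2>/dev/null
openssl pkey -in vendor.key -pubout -out vendor.pub
openssl dgst -sha256 -sign vendor.key -out update.sig update.tar
openssl enc -aes-256-cbc -pbkdf2 -pass pass:customer-secret \
    -in update.tar -out update.tar.enc

# Appliance side: decrypt, then refuse the update unless the signature verifies
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:customer-secret \
    -in update.tar.enc -out received.tar
openssl dgst -sha256 -verify vendor.pub -signature update.sig received.tar
```

The last command prints "Verified OK" only if the decrypted payload matches what the vendor signed; a package decrypted with the wrong key, or tampered with in transit, fails the check and the update is aborted.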
Jidoteki Meta
All of this can be managed directly by the Jidoteki Meta on-prem virtual appliance. We’ve since added the ability to generate one or more types of update packages, all at once. This provides the freedom to test different update approaches, and provide various packages to customers based on their needs. The added flexibility helps ensure customers can always update their appliance down the line.
As always, we’re available to help with the creation and updating of on-prem virtual appliances. If you’re currently shipping your application as an Installer, or as a SaaS, and you want to provide the absolute best experience to your enterprise customers, then feel free to contact us, we’ll be more than happy to help.
Frequently Asked Questions about our service
Over the years, we’ve been able to compile a list of the most frequently asked questions regarding our service. This blog post has recently been summarized on our website, so I’ll provide more details here.
Business questions
Q: Why did you get rid of the Jidoteki SaaS? A: The Jidoteki SaaS we were offering was shut down sometime in 2015, when we decided to focus on providing a Managed Service for building virtual appliances. The first and most obvious reason was that we followed demand. Most of our customers didn’t want to, or were unable to, do all the “provisioning” work on their own - that is, the step of actually configuring the appliance with Puppet/Chef/Ansible. One of the issues of hosting a SaaS for Jidoteki was that virtual appliance builds required a large amount of memory, disk space, bandwidth, and CPU. The cost was high for us, and would end up being higher for our customers. Downloading large files from our servers in Tokyo introduced latency issues as well. Finally, being a small bootstrapped company meant we needed to avoid supporting servers 24/7 - aka being on-call.
Q: Your services are not exactly cheap. Why? A: The primary reason to use Jidoteki as a managed service is to offload the most difficult part of going on-prem: creating and updating the appliance, to a team of experts. In 5 years of running this business, we’ve accumulated a plethora of knowledge which we can transfer directly to our customers, in the form of solutions to the most difficult edge-cases, and with an unbelievably fast turnaround time-to-ship. With everything we’ve built so far, depending on the complexity of a customer’s setup, we can typically have a beta virtual appliance ready in less than one week, containing all the features of a professional updateable and secure on-prem virtual appliance. It takes (and took) time to do this work, and so we charge for that.
Q: What’s included in the Setup plan? A: Other than what’s already listed on the site, the Setup plan is just a consulting contract allowing us to build your first virtual appliance (OVA) from scratch, with all the features required for you to ship to production. It also includes the Ansible scripts to build the custom rootfs (see Technical Questions below). The scripts are openly licensed to you, so you’re free to modify and re-use them as you see fit. Since the Setup plan is only for the initial OVA, you will be left to manage the updates of that OVA on your own, along with the creation of new OVAs. We can provide suggestions on how to create updates and new OVAs on your own, but it’s also a lot of work, which is why we offer the Support and Meta plans.
Q: Are the Support and Meta plans really necessary? A: A full project is typically completed in one month - from the day we start work, to the day you’re shipping to your customers. During that time, we constantly iterate on the appliance until it’s built exactly as needed. The Support and Meta plans are completely optional, and only really needed once the appliance has been released. The Support plan includes access to our team for help with updating the appliance (creating Update Packages), as well as updating dependencies such as OpenSSL, handling weird edge-cases, and adding new features such as NFS support, etc. We’ll do all that work for you under our Support plan. The Meta plan is a self-serve DIY solution, also known as Jidoteki v3. If you’re comfortable with updating the appliance dependencies, and want to host everything internally, the Jidoteki Meta appliance is the best option. It runs entirely offline, and can easily be integrated into a Continuous Integration system such as Jenkins, to automate the creation of your Update Packages and OVAs. This is a somewhat repackaged version of the original Jidoteki SaaS, although it’s quite different behind the scenes (we got rid of Ruby and NodeJS).
Technical questions
Q: What is the difference between a rootfs and an OVA? A: The rootfs is a single cpio archive built from scratch, which contains custom init scripts, custom config files, boot scripts, and other things which we build specifically for your virtual appliance. The rootfs also contains all of your app’s software dependencies (ex: nodejs and the node_modules). The rootfs is the only thing that’s really “custom” about your OVA, other than the OVF metadata file - but that’s just a detail. The OVA is just a standard tar archive which contains the OVF file, and 2 vmdk disk images. The first disk image is the bootable disk, which contains among other things: the rootfs, the unmodified base OS (TinyCore Linux), and a vanilla Linux kernel. To update an OVA, you can boot it and simply replace the existing rootfs, but that process is manual and error-prone.
Q: How is Jidoteki better than InstallAnywhere, Bitnami, Replicated, Gravitational? A: InstallAnywhere is garbage. Garbage in, garbage out. If you enjoy employing (and paying) a full-time engineer just to build your Installer or OVAs, then that software is the way to go, but you’ll deeply regret it in the short and long-term. Bitnami is focused more on creating Open Source stacks - which is admirable since we love Open Source - but their process is not aimed at building secure, hardened, on-prem, small-footprint, easy-to-update appliances the way ours are built. Case in point: their base OS is Ubuntu. Replicated and Gravitational are not really in the same category as us, since they chose to redefine the term “on-premises” to mean “on Amazon’s or Google’s cloud”. They do seem to provide some nice features for management of appliances, and we respect them for that, but their solutions include a hidden "feature”: vendor lock-in. Choosing to work with those closed-source SaaS platforms locks you into their technology, and their vendor platforms, and their pricing models. Finally, they both focus on Docker at the base of the system. If you’re not using Docker and don’t care for it, then you will be forced to use it with them. What we provide - which is a much more sane solution - is a 100% open source platform for your appliance, without Docker, and with a very tiny OS to boot. Every single tool, script and software included in your appliances is open sourced. This allows you to validate that it does what it’s supposed to do (no telemetry or analytics phoning home, no timebombs or illegal data logging). It also provides complete freedom to modify your solution as you see fit, as opposed to being locked to our way of doing things. That in itself is the most valuable tool for any business. 
Our Jidoteki Meta appliance does contain some closed-source software, but none of it is obfuscated (we distribute the sources), and as our customer, you’re more than welcome to look at the code and suggest improvements/bug fixes if you want. Again this benefit is a long-term one, where even if you don’t “want” to modify our code, if we happen to stop working on Jidoteki Meta, you “can” continue using it.
Q: Why do you dislike Docker and systemd? A: Garbage in, garbage out.
Q: How do you handle updates when there’s no internet? A: I don’t want to say it’s our secret sauce, because we’ve blogged about it here, here, here, and here. Essentially, by running the OS entirely in memory, it allows us to atomically update the system’s OS, kernel, and custom rootfs, without affecting the running system. A reboot is required to “activate” the changes, but once that’s done, they are up and running with a fully updated system. To handle offline updates, we package the update files into an encrypted tar package, and distribute that. The package can be used to fully update the system to the latest version, without requiring internet access. If an update happens to fail, it is automatically reverted and “nothing changes” to the running system.
Q: How do you handle database migrations? A: Since the OS is only “updated” on reboot, all DB migrations should run on boot. We use shmig to handle the DB migrations after a reboot, and it has worked flawlessly for us since the start. It’s your job to ensure the DB migration scripts don’t fail when run. Unfortunately that’s not our area of expertise, but the following answer might help.
Q: Can you revert a failed migration? A: Of course we can! By default, all our appliances ship with a second hard disk for storing the database and other persistent data. We ensure to include LVM, to allow you to perform a database snapshot right before running the database migration. If the migration fails, you can simply revert the snapshot and go back to a working state. In any case, if a migration fails and leaves the DB in a bad state where your application won’t start, the included (open source) Jidoteki Admin tools will allow your customers to either upload a new Update Package, to debug the appliance, and even login via SSH to perform manual system administration.
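The snapshot-and-revert pattern looks roughly like this. The commands are printed rather than executed, since they need root and a real volume group; vg0/data and the snapshot size are placeholders:

```shell
#!/bin/sh
# Sketch of LVM snapshot protection around a DB migration. Printed only;
# volume group, LV names, and sizes are placeholders.
snapshot_steps() {
    echo "lvcreate --size 1G --snapshot --name pre-migration /dev/vg0/data"
    echo "# ...run the database migration..."
    echo "# success: the snapshot is no longer needed"
    echo "lvremove -f /dev/vg0/pre-migration"
    echo "# failure: merge the snapshot back to restore the pre-migration state"
    echo "lvconvert --merge /dev/vg0/pre-migration"
}
snapshot_steps
```

Note that lvconvert --merge only completes once the origin volume is deactivated or the system reboots — which happens to fit neatly with a reboot-to-update model.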
Q: Can you do hardware builds? A: Yes! We also have a set of Ansible scripts to build a system for the Beagle Bone Black and Raspberry Pi family. The final system is almost identical to a virtual appliance, the only difference is rather than having a custom rootfs layered on top of the unmodified OS, we’re forced to merge them all into one rootfs (because the ARM bootloader can’t chainload them). Of course, the binaries are all compiled for the armv7 architecture instead of x86 or x86_64.
Q: What’s the typical size of a final OVA and Update Package? A: Our latest Jidoteki Meta appliance is 57MB, and Update Packages vary wildly, but are typically between 100KB and 5MB. Update Packages contain binary diffs of the previous OVA files, which allows them to be much smaller than a full OVA. Of course, if you’re updating a Java application and constantly making changes to the file/directory structure, adding new dependencies, etc, then Update Packages can grow quite large, to several hundred MB or more. There’s no rule of thumb for the OVA size, but our default appliance with all the base dependencies, kernel, etc ends up at roughly 35MB. Everything else depends on your application size and its dependencies.
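To illustrate why the packages stay small, here is a binary-diff round trip. We don’t know which differ Jidoteki actually uses; xdelta3 is just one common choice, and the `command -v` guard lets the sketch run on machines without it.

```shell
#!/bin/sh
# Why update packages are small: ship a binary diff of each changed
# file rather than the file itself. All paths are hypothetical.
set -u

OLD=/tmp/rootfs-v1.12.img
NEW=/tmp/rootfs-v1.13.img
printf 'shared base content, version A\n' > "$OLD"
printf 'shared base content, version B\n' > "$NEW"

if command -v xdelta3 >/dev/null 2>&1; then
    # Encode a diff from OLD to NEW, then re-apply it to OLD and
    # check that the result is byte-identical to NEW.
    xdelta3 -e -f -s "$OLD" "$NEW" /tmp/rootfs.vcdiff
    xdelta3 -d -f -s "$OLD" /tmp/rootfs.vcdiff /tmp/rebuilt.img
    cmp -s "$NEW" /tmp/rebuilt.img && RESULT=match || RESULT=mismatch
else
    RESULT=skipped    # xdelta3 not installed on this machine
fi
echo "diff round-trip: $RESULT"
```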
Misc questions
Q: What does Jidoteki mean? A: In Japanese it’s written 自動的 (じどうてき), which means “automatically”. Our plan has always been to automate the build of virtual appliances through scripting, continuous integration, etc, so we thought the name was apt even though it sounds a bit weird. We’re also based in Japan.
Q: Is Jidoteki Meta going to be open sourced? A: Yes, we plan to fully open source it eventually. For the moment we’re releasing it in parts, since we haven’t yet written full documentation or tests for all the tools. It would be irresponsible of us to open source something with no tests and no documentation ;) We would like to fund development of Jidoteki Meta by selling it first, as a “partially” closed source solution, and within the next year or two, once the product is mature, we would be happy to release it publicly for the community to use and maintain collectively.
Q: What other benefit(s) do you provide? A: The greatest benefit is that you can focus exclusively on your software while we (experts) handle your on-prem builds and setup. This has an important impact on your engineering team, since you don’t need to dedicate any resources to something that’s not core to your business, which translates to huge cost savings. Moreover, with your newfound ability to quickly iterate on your appliance, you can rapidly ship updates, new versions, and new features to your customers, which translates to increased revenue. In the end, the cost of working with us will have a significant positive financial impact on your business, and as long as that’s true, it’s a win-win for both of us.
I hope this long and detailed FAQ answers questions which many of you may already have regarding building on-premise virtual appliances. If you’re interested in getting started with Jidoteki as a managed service, please feel free to contact us, and we’ll be happy to discuss your requirements and see how we can help. Thanks for reading!