#picolisp
PicoLisp: A Railroad Simulation
https://picolisp-explored.com/a-railroad-simulation-with-des
Inside Jidometa: Caching JSON responses
[Read part 1 of Inside Jidometa]
We recently implemented a feature which greatly improves the browser experience in Jidoteki Meta (Jidometa).
In the past...
The Jidoteki Meta UI is a simple JavaScript layer on top of a PicoLisp REST API. The API returns JSON responses for every successful request, but occasionally we would see rather slow navigation between sections of the UI.
Certain responses tend to take more time as the size of the data increases. There is no way around this, since many tables and entries need to be queried on every request. Or so we thought...
The present...
To “solve” this problem, we decided caching responses for common GET requests would be a good approach.
Our idea was to only cache the most queried items (the list of all builds, and the details for each build). Caching this data makes it possible to quickly navigate between different builds and the full list of builds, without making any calls to the API.
Caching gotchas
Before implementing any caching solution, it's important to fully understand the gotchas that accompany it. In most cases, the most important requirement is to never serve stale data. Invalidating the cache requires us to ensure that any cached elements affected by a change are immediately evicted.
Another gotcha is avoiding the temptation to add yet another layer of complexity to the stack. We always aim to do things as simply as possible, with the least amount of software and overhead.
We needed the ability to easily disable caching or wipe the cache if needed, and we needed to ensure the browser’s own caching features didn’t interfere with ours.
Caching as simply as possible
Since our Jidometa virtual appliance (as well as our customer appliances) runs entirely in memory, it was easy for us to assign a temporary directory for storing cached responses.
We store responses in /tmp/jidometa/cache/ - which is not actually “on disk”, but rather an in-memory filesystem (tmpfs). Those responses are regenerated on every GET request to the API. Fortunately there’s no overhead for that.
Each cached response is a simple .json file which is absolutely identical to what would have been served by the API. The files are named according to the build ID they represent, so there’s only ever one cached response per build.
It’s hard to get any simpler than that.
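As a minimal sketch (not our exact code, which isn't shown here), writing a cached response could be as small as redirecting output to the right file. Json stands for an already-encoded response string, and the function name is hypothetical:

(de cache-build-response (Build Json)
   # Hypothetical: store the freshly generated JSON response under
   # the same name Nginx will later serve from the tmpfs cache
   (out (pack "/tmp/jidometa/cache/builds/" Build "-details.json")
      (prinl Json) ) )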
Serving the cached responses
We use Nginx as a frontend for static files and the API. Below are examples of the two Nginx directives we added for serving the above cached files:
location /cache/ {
    add_header Cache-Control no-cache;
    alias /tmp/jidometa/cache/;
    try_files $uri @nocache;
}

location @nocache {
    rewrite /cache/builds/(.*)-details.json /builds/$1/details break;
    rewrite /cache/builds.json /builds break;
    proxy_pass http://jidometa;
}
As an example, a request for /cache/builds.json will first be served from /tmp/jidometa/cache/builds.json - if the file doesn't exist, Nginx falls back to the @nocache location. That location then rewrites the request to /builds on the API, which returns a fresh response (and generates a new cached builds.json).
We also send the Cache-Control: no-cache HTTP header to ensure browsers don't cache the cached response... but here's the thing: browsers WILL cache the response regardless. The no-cache directive simply forces the browser to revalidate with the Last-Modified or ETag headers on future requests, to check whether the cached entry has changed. Well, assuming browsers handle those headers according to the RFC specs ;)
If the cached file hasn’t changed, the browser won’t fetch the entire file again.
Invalidating the cache
Builds are linked to their parent build (if any), as well as their child builds (if any).
When a certain build is modified (ex: the build’s status is changed from unreleased -> released), its cached response needs to be invalidated. We decided the simplest and easiest way to invalidate a cached entry is to delete the build’s cached response, and all the other responses linked to it (its parent and children).
Here’s the code we use for invalidation (PicoLisp):
(de remove-cached-file (Filename)
   (call 'rm "-f" (pack "/tmp/jidometa/cache/" Filename)) )

(de remove-cached-build (Build)
   (remove-cached-file (pack "builds/" Build "-details.json")) )

(de remove-all-cached-builds (Builds)
   (mapcar
      '((N) (remove-cached-build (; N 1)))
      Builds ) )
A call to (remove-all-cached-builds), with the list of Builds as the only argument, handles removing each cached response individually. What's nice is we can also use (remove-cached-file "builds.json") to remove the entire builds list from the cache.
Of course, we wrapped all those calls in a simple (invalidate-cache) function which can be called anywhere in our code that updates an existing build or inserts a new one.
If a build has one parent and three child builds, then a total of six cached responses will be removed from /tmp/jidometa/cache/, including the builds.json (list of all builds). This is quite an aggressive cache invalidation strategy, but it ensures there are never any stale responses, and a new cached response will be regenerated as soon as a request is made for the “missing” file. Easy.
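To make the strategy concrete, here's a minimal sketch of such a wrapper. It's illustrative only: build-parent and build-children are hypothetical helpers standing in for the real queries, but the removal steps follow the description above:

(de invalidate-cache (Build)
   (remove-cached-build Build)            # the build itself
   (when (build-parent Build)             # its parent, if any
      (remove-cached-build @) )
   (mapc remove-cached-build              # its children, if any
      (build-children Build) )
   (remove-cached-file "builds.json") )   # the full builds list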
Issues with caching
We have yet to encounter any issues with cached responses, but if they do occur, it's quite simple to identify exactly which response is problematic and to delete it manually from the cache directory. It's also simple to completely wipe the cache from the system and start over from scratch - this happens after a reboot anyways.
The main issue with a fresh cache is the first page load will be slightly slower than the others. It might take a few clicks to “warm” the cache with fresh responses, but considering the use-case for Jidometa (on-prem virtual appliance), we can live with that. Had there been a requirement to support hundreds of concurrent users, we would have considered an in-memory cache with persistence, such as Redis... maybe.
In conclusion
I would go into further detail, but really there’s nothing more to it. Serving cached JSON responses the way we do is somewhat of a “poor man’s solution”, but the lack of overhead and the simplicity of the implementation are a net win for everyone.
Stay tuned for part 3, where we’ll provide more details regarding the Jidometa internals.
Vera May, symbolic analog multimedia erudite "OS".
Another WIP on the blog; sorry about the abundance of these. I simply do my best to exercise my duty of creative writing for my manifestation & life-long learning / edutainment purposes. Enjoy!
( Shoshoni Union of Republics, Commune of Samoa, Unitary Republic of Sumer, Inuit Assembly of Nations, Heavenly Empire of Vietnam, Federation of Brazil, Theocratic State of the Maya, Dominion of Persia, Tsardom of Poland, Realm of Assyria, Imperium of Portugal, Realm of Babylon + Akkad; )
Neue Maxima Distributed Interactive System (NM-DIS)
for Symbolic & Analog Perseus Data-Processors and other architectures...
True, Libre and Open Source (permissive license? or more likely a public domain waiver)
macSOS (sophisticated machine operating system)-like GUI with CDE VUE [KDE Plasma Liquid variant], so dock on bottom and bar on top, yet extremely customizable theming parameters
Fish-like shell with ZealOS & Parade command additions
Programming with a JIT scripting compiler using Nim (looking towards ZealC support?), Macroware's Fidel (Microsoft's F#), Utalics' Paco (PicoLisp + Tiny FreeBASIC?) and GNU Common Lisp (with CLOS and secd?)
Multilingual word processing office utilities (Notion/Obsidian/Gollum/LotusScript-likes), multimedia designer tools (K3B, Okteta, GIMP, Krita, G'MIC, Blender, Inkscape, SweetHome3D, FreeCAD...) & video software toys (Reichbürger, BUILD2, FreeCiv, Evennia, Star Traders, Cataclysm Bright Days Ahead, Qodot, Hammer++, Portal Stories Mel, Lightfall, Prospero, A Mind Forever Voyaging, Hunt Showdown, Rainworld, GOG Galaxy, Itch.io, Unreal Engine 5...)
Using a 12-bit data word as most basic addressing unit as per the specifications of my bytecode virtual sandbox interpreter environment (also has twelve major generic registers of 48-bit length each, so quite 64-bit RISC-V + DEC Alpha-tier)
CLADOgrams filesystem + distributed file-server with buffers
Linux stable "Zen"-branch Nucleus kernel
Siemens' DIS (ITS+DOS), Pflaumen's COS-360 (COS-310) & EBM's SASS (AIX with CDE) utilities
Perseus standard-compliant (POSIX) sub-systems & program agents (ala Shimeji-ees and Microsoft BOB)
AGAS (OGAS) + DirectXanadu (OpenXanadu) networking protocols
PhantomVSO (PhantomOS) persistence and safe data mutability
Vandex (Yandex) curated research engine & hosting services (Geocities)?
VastTiger web browser (Konqueror + Falkon + Firefox + LibreWolf)
Nemo (regular tabbed file manager)
Konsole (light terminal emulator)
Kate (advanced text editor)
Kardfile (interactive debugger and expansive disk editor)...
Soft solarpunk toons (dark + light) adaptive theme parameters by default
Screenshots
[?]
Screencasts
[?]
More information
?
New top story on Hacker News: Build Android Apps in PicoLisp Without an Android SDK
Build Android Apps in PicoLisp Without an Android SDK (87 points by homarp | 30 comments) via Blogger: https://ift.tt/2zYscPg
Everything Picolisp can do, and more
https://picolisp.com/wiki/?Documentation
Inside Jidometa: Concurrency with Mosquitto (MQTT) and SQLite
We’re beginning a series of articles to discuss the Jidometa internals.
We want to begin by talking about how Jidometa handles concurrency.
In the past...
Our initial SaaS service - Jidoteki - was built using Ruby, Redis, and Resque for concurrent builds. We quickly ran into issues when aiming to scale beyond one server. Those technologies were definitely great, but quite limiting in our ability to build across a cluster of servers (without implementing our own wacky scaling scheme - no thanks).
We replaced Resque with RabbitMQ, which served us extremely well for a few years. The wonderful administration interface, the somewhat low overhead, and the fact that all our apps (Ruby, Node.js, Bash) could easily talk to it were all pluses.
The present...
With our move from SaaS to On-Prem appliance (Jidometa), we realized RabbitMQ would be overkill for such a deployment. We wanted to build an appliance with minimal dependencies, and a tiny footprint.
We chose to replace Redis and RabbitMQ with SQLite and Mosquitto (MQTT).
Weird stack
I haven’t found any public articles on doing concurrency in SQLite using Mosquitto, so it was an adventure attempting to do it without guidance. Perhaps our stack is weird, considering we mostly write our code in PicoLisp now, but since we were already quite familiar with it, the decision was simple.
As many know, SQLite on its own is not ideal for write concurrency. Some best practices include:
setting "PRAGMA journal_mode = wal;"
setting an initial connection timeout to a ridiculously high number (we set ours to 20 minutes)
simply NOT doing concurrent writes. Hah!
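For illustration, the first two settings can be applied through the sqlite3 CLI in the same (call 'sqlite3) style mentioned later in this post. The database path here is hypothetical; note that journal_mode sticks to the database file, while busy_timeout only lasts for one connection (20 minutes = 1200000 ms):

# WAL mode is a property of the database file, so a one-shot call is enough
(call 'sqlite3 "/path/to/jidometa.db" "PRAGMA journal_mode = wal;")

# busy_timeout is per-connection, so prepend it to each session's statements
(call 'sqlite3 "/path/to/jidometa.db"
   "PRAGMA busy_timeout = 1200000; SELECT 1;" )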
Doing concurrent writes
To solve the problem of concurrent writes in SQLite, we opted to serialize our database writes over an MQTT "queue". I put "queue" in quotes because it's not technically a queue, although with the correct settings it can behave like one (mosquitto.conf):
allow_duplicate_messages false
max_inflight_messages 1
max_queued_messages 0
persistence true
Those are some options which will allow you to have a safe, FIFO-based MQTT “queue”. Since database writes are really important, it’s necessary to use QoS 2 on both ends (publisher and subscriber).
Our publisher and subscriber
This is where PicoLisp comes in. At the moment we're not using a native library to access SQLite from within PicoLisp; we're using a much less efficient system command: (call 'sqlite3).
We wrote a subscriber daemon which listens for messages on the mosquitto/MQTT queue, and then processes the messages, validates the data, and performs the DB write. The messages can arrive concurrently, but only one message will ever be processed at a time - thus enabling serial DB writes. You can think of it as Unicorn with only 1 worker.
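The daemon's source isn't included in this post, but its core loop can be pictured as a blocking read over mosquitto_sub at QoS 2. In this minimal sketch, the topic name is made up and process-db-write is a hypothetical helper standing in for the real validation and write logic:

# Messages may arrive concurrently, but this single loop handles them
# one at a time, serializing the DB writes
(in '(mosquitto_sub "-q" "2" "-t" "jidometa/db/writes")
   (until (eof)
      (process-db-write (line T)) ) )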
On the other end, the Jidometa API and backend scripts publish messages to the queue whenever something needs to be processed and written to the database. Those published messages can occur concurrently, at very high rates, without worrying about messages being missed, lost, or dropped (since it’s all occurring locally over localhost).
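On the publishing side, a single shell-out per message is conceptually all that's needed. Again, the topic and payload below are illustrative, not Jidometa's real ones:

# Enqueue one DB write as a QoS 2 (exactly-once) message over localhost
(call 'mosquitto_pub "-q" "2" "-t" "jidometa/db/writes"
   "-m" "{\"table\": \"builds\", \"action\": \"insert\"}" )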
All DB reads are made directly to the SQLite database file, since reads can be concurrent and won’t affect the database in any way.
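A direct read in that same CLI style might look like the following; the path and query are placeholders:

# Read straight from the database file, bypassing the MQTT queue
(in '(sqlite3 "/path/to/jidometa.db" "SELECT id, status FROM builds;")
   (until (eof)
      (prinl (line T)) ) )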
Big advantages
The added benefit of enabling MQTT behind the scenes is that the Mosquitto server also ships with built-in WebSockets and TLS support.
We’ve opted to use that for reading and publishing build log messages, status updates, and other things which update the browser. On the browser side, we’re using a slightly modified version of the Paho MQTT JavaScript library. Our modifications include the ability to store messages in cookies, and making LocalStorage optional.
Moreover, we can easily extend the worker to perform other tasks when it receives certain messages, such as running integration scripts, performing basic maintenance, sending out alerts..
No locked database
Since implementing our concurrency in SQLite using MQTT, we've eliminated "database is locked" errors and are able to use the tiny file-based database the way we want.
Of course, we could have avoided all this by using a different relational database (ex: PostgreSQL), but it wouldn't have given us the wonderful properties of a tiny footprint, WebSockets, and MQTT.
See part 2, where we provide more details regarding the Jidometa internals.
PicoLisp (181 points by kqr | 34 comments) on Hacker News.
Calling all lispers of all dialects! Tomorrow is the first of the new series online-lisp-meets, organised by Michał "phoe" Herda. Join us at 18:00 CEST. Info: https://t.co/SWaTZqPIj2 #lisp #scheme #racket #commonlisp #clojure #elisp #dylan #picolisp #acl2 and all the lisps! 🥰
— Ioanna M. Dimitriou H. (@ioa_27) May 11, 2020
New top story on Hacker News: PicoLisp
PicoLisp (194 points by kqr | 36 comments) on Hacker News.
Build your Apps in PicoLisp without an Android SDK
PilBox ("PicoLisp Box") is a generic Android App which allows one to write Apps in pure PicoLisp, without touching Java, and without the need of an Android SDK. You do not need to root your device. And - if you prefer - you do not need a separate development machine (PC or laptop): all can be done in a terminal on the device, and even in a Lisp REPL while the App is running. Note: PilBox needs Android >= 5.0!
Inside Jidometa: A look at our Open Source Software
[Read part 2 of Inside Jidometa]
In this post, we'll highlight some of the open source tools we've created and that we deploy in every Jidoteki appliance, including Jidometa itself.
The OS
We'll start from the bottom up. Jidometa is built on top of TinyCore Linux - a small-footprint, in-memory operating system built on GNU/Linux. We use the OS mostly unmodified, with a few minor changes.
The sources for the OS toolchain
The sources for the OS busybox
The TinyCore OS initramfs and kernel
The TinyCore OS scripts modifications
The kernel
TinyCore Linux ships with a slightly modified Linux kernel. In our tests, we were able to deploy our appliances with a completely unmodified (vanilla) Linux kernel, so we provide the full unmodified kernel sources.
The kernel build scripts
The sources for the Linux kernel
The extensions
Extensions are similar to Debian .deb packages and RedHat .rpm packages. In Jidometa, they are squashfs .tcz files which contain pre-compiled and stripped binaries. They're typically much smaller because we don't include man pages, headers, and other files not needed in an immutable OS.
The sources for the extensions
We have yet to publish all the build scripts for our extensions, but they all use basic commands:
./configure; make; make install
Anyone can easily rebuild them with the original sources.
The admin scripts
We always include our own Open Source scripts to help manage the appliance. The scripts vary in importance and are either written in POSIX shell, or PicoLisp. We're slowly working on replacing all our POSIX shell scripts with PicoLisp shell scripts.
The Jidoteki Admin scripts, which manage the appliance from the console
The Jidoteki Admin API scripts, which manage the appliance from a REST API
The helper scripts, which provide additional functionality to the appliance scripts
We have quite a few more administration scripts, but we haven't open sourced them yet.
The libraries
The Admin scripts are built on top of a set of open source PicoLisp libraries which provide the foundation for stable, tested, and functional tools. The libraries include:
A unit testing library, to help write simple unit tests for every PicoLisp script and library, and ensure correct functionality (as well as reduce bugs/regressions)
A JSON library, to natively parse and generate JSON documents directly in PicoLisp
A SemVer library, to help manage, compare, and validate appliance and update package versions
An API library, to help build simple REST APIs as quickly and easily as possible
The licenses
It's almost impossible to gather every single license for every single piece of software used in the appliance. We provide the software source packages intact, which include all the unmodified license files as well.
Additional licenses can be found directly in the appliance in:
/usr/share/doc/License/
The ISOs
Since downloading all individual source packages can be quite troublesome at times, we also provide direct links to download ISOs which bundle all the sources together.
Fauxpen source
We're not running a fauxpen source operation here. Admittedly, not all our code is Open Source, but we do our best to comply with the GNU and OSD licenses by making it easy for everyone to access the source files for the appliances and binaries we distribute.
We're constantly releasing new open source tools and libraries, so make sure you sign up for our mailing list to stay up-to-date on our work, tools, and solutions.
If you're looking to provide your customers with an on-premises virtual appliance, without being locked-in by your vendor, then contact us and we'll be happy to discuss your requirements.
You know something is serious if it casually cites from "The Art of Computer Programming".