sc3d-blog
a long view
23 posts
Reubenical ruminations from 2.6 parsecs out.
sc3d-blog · 2 years ago
Back home
This blog on Tumblr never really fulfilled its purpose, which was to allow commenting once my personal website ceased to be a wiki.
So, I will make no further posts here.
See my website for future (and past) posts.
sc3d-blog · 8 years ago
Bookshore: a library is a liminal thing
In French, “librairie” means bookshop, a usage that seems prophetic now that so many small (and not-so-small) Amazon booksellers’ stock consists of what would once have been their owners’ libraries. The books are become interchangeable, still fulfilling their function as extra insulation, but with their old secondary function of entertainment and enlightenment replaced by that of enrichment. (I imagine that some particularly cleverly-balanced people combine reading with commerce, but not many; the Amazon stock at Thomas Towers is pre-packed for the post and stored in the attic, as much to protect the rest of the collection from abduction as to grease the wheels of commerce.)
I’ve spent the last couple of days overhauling my browser bookmarks, of which I have some 1,300 (coincidentally, when I catalogued the family book collection as a teenager, it amounted to roughly that number of volumes). I filed a hundred or so amassed over the past four or five years, and combed through 300 identified as broken links, updating them where possible, in some cases pointing them at the astonishing Wayback Machine (at least as important a public service as Wikipedia), and in a few cases lamenting their utter disappearance.
But the sort of link I’m interested in right now is the sort I simply deleted: links I no longer needed. Later, I realised that I’ve started treating my browser bookmarks the way I treat my book collection: as a boundary, a shoreline.
The hinterland is the contents of my brain: what I’ve learned, can recall, can rely on. A few mines of information are represented on my shelves, and in my bookmarks, by reference works: dictionaries (actually, I mostly have these on my phone now, as vadite mecum and for speed of use) and other essential works; online equivalents are mostly esoteric, as the ones I use all the time need no bookmark (Google, Wikipedia &c.).
The ocean is everything I do not know, will never know, have no wish to know.
And the shoreline: that is where the books and bookmarks are littered. Everything I am interested in, the pebbles of knowledge, the seaweed of YouTube & thrillers, the curious creatures of mine and others’ unknown selves. Here, my physical library is rather skewed, as thanks to the unconscionable length of modern copyright combined with the irresistible collection left by my maternal grandparents, it consists mostly of books I have or would like to read once, but must keep in case, as the former would be too expensive to replace, and the latter to acquire. At least the classics are safely removed to the still-expanding Project Gutenberg, which should surely replace the Bible-and-a-Shakespeare as standard issue on Desert Island Discs.
But the rest is largely a matter of memory: if I might forget it, I bookmark it. If I don’t know it, I have the book. If I’m simply not interested, why give it window or wall space? Disk space is another thing altogether: my computers are full of data I will probably never need, but, unlike with books, I can keep it even when the probability of needing it is effectively zero, since that is also the chance I’ll ever fill my disk with text. That’s not the same as bookmarking everything, as bookmarks take effort to navigate, and hence occupy brain space.
So I suggest that, where of old a library might have been merely indicative of its owner’s interests and knowledge, much still gained from other sources, or simply on subjects not treated by the published word; or, more recently, a solid image of its owner, books representing reading, actual or aspirational; now, it’s more likely to constitute a shoreline, knowledge behind and ignorance ahead, and hopefully the tide is going out!
sc3d-blog · 12 years ago
One step forward and two back: Alexander’s early and Pallasmaa’s late work
A comparison of “Notes on the Synthesis of Form” by Christopher Alexander and “The Thinking Hand” by Juhani Pallasmaa
“Notes on the Synthesis of Form” was the first monograph, published in 1964, by Christopher Alexander, the Vienna-born British architect who first studied mathematics at Cambridge University and then spent most of his career at the University of California at Berkeley. “The Thinking Hand” was written in 2009 by Juhani Pallasmaa, the Finnish architect and long-time professor of architecture at the Helsinki University of Technology. They were born within a few weeks of each other in 1936. Both have undertaken major projects, but while Pallasmaa’s look familiar to students of modern architecture, Alexander’s are idiosyncratic and widely dismissed by his peers, though as a theorist he has been influential. (My copy of “Notes on the Synthesis of Form” comes from Upper Iowa University Library, and the return card shows its having been taken out only once, in 1968.) Alexander has had considerable influence in computer science: both “Notes on the Synthesis of Form” and his later “A Pattern Language” have shaped developments in programming languages and techniques.
Art vs craft
What particularly fascinates me about these books and their authors is that a summary of their arguments gives completely the opposite impression to the character of their authors’ works. Pallasmaa calls for a reconnection with embodied thinking in an era that has become too visual and virtual, while Alexander demands a formal approach to design in a world that has become too complex for intuitive approaches; but it is Pallasmaa’s architecture that is modern and Alexander’s that is traditional.
The key to this apparent paradox is in the authors’ characters: Pallasmaa is unabashedly modern in his insistence on the architect’s central importance as a visionary artist–engineer, while Alexander is much more cautious: he argues that architects, and designers in general, largely fail to cope with the problems with which they are faced: their “intuitive ability to organize physical form is…reduced to nothing by the size of the tasks”, and that instead they “hide [their] incompetence in a frenzy of artistic individuality”.
The respective orientations pervade and structure the books. Pallasmaa draws largely on other artists for his inspiration, and takes a thematic approach, with chapters including “The Mysterious Hand”, “The Working Hand”, and “Embodied Thinking”. He illuminates his argument with copious illustrations and fulsome references, with a full page of endnotes at the end of several of the short chapters (eight in 140 pages, with generous margins and the afore-mentioned frequent illustrations). Alexander by contrast offers a programme, and divides his 200-page volume into three main parts: first, an analysis of the problem, with chapters that define the design problem and the traditional “unselfconscious” and modern “selfconscious” design processes; secondly, an exposition of his formal analytic–synthetic process based on the extraction of a “program” or decomposition of the problem from formal–functional “constructive diagrams”; and thirdly two appendices which respectively give an extended example of the approach and the mathematical justification of the formal process. He draws on a similarly wide range of sources, overwhelmingly scientific, from disciplines as diverse as biology, mathematics and anthropology.
In summary, Alexander’s approach emphasizes science and craft, while largely taking artistry for granted (when it’s not part of the problem), while Pallasmaa insists on the need for an embodied artistic vision.
Conscious vs unconscious: a step forward
Both authors observe the limitations of abstract intellectual effort, but for different reasons and in different ways. Alexander defines the problem of design as one of dividing an “ensemble” into “form” and “context”, and then designing the form so as to ensure “good fit” between the two parts. He then observes that in traditional societies design is an “unselfconscious process”, that is, neither codified nor formally taught, but rather encoded in the patterns of the society and its objects. Crucially, he says, the learned skills consist simply of attempting to correct “bad fit”. A maker who has come across the problem before may use a learned solution; otherwise, a random change may be made, and effective solutions may become part of the tradition. Alexander does not mention natural selection, but in fact this is what he is describing. As in nature, it is tremendously powerful, and matches the structure of the design problem itself. It is impossible to give an exhaustive list of what constitutes “good fit”, but only a partial list of “misfits” that have arisen in past experience: the tradition consists of a series of adaptations to past problems.
For the unselfconscious process to work, two conditions must be met: the design problem must be decomposable into problems that can be solved separately, and neither the culture nor the physical environment must change too quickly to allow the tradition to reach an equilibrium of good fit. The need for selfconscious design in the modern world has arisen, Alexander says, because both conditions have been broken: society has become too complex and changes too fast for unselfconscious processes to work (though there are also counter-forces which exacerbate the problem, for example, “buildings are more permanent”). (It might be interesting to reflect on cause and effect here, in particular, whether unselfconscious processes actually broke down, or whether they were abandoned for other reasons as modern society developed, but Alexander doesn’t; however, his later work such as “A Pattern Language” strongly suggests that he reconsidered the applicability of unselfconscious processes to the modern world, which given the successes of free-market capitalism seems only sensible.)
Alexander claims that selfconscious design doesn’t work at present because of a combination of sheer complexity (the classic “seven plus or minus two” phenomenon) and the human tendency to analyse in linguistic terms which don’t fit reality. His solution is to boost human cognitive abilities with formal methods. He describes this as a loss of innocence, but says that “whether we decide to stand for or against pure intuition as a method, we must do so for reasons which can be discussed”. He contrasts this attitude with designers who “insist that design must be a purely intuitive process: that it is hopeless to try and understand it”.
The defence of silence: a step back
Pallasmaa implicitly contradicts this view by insisting on the primacy of embodied wisdom over intellect: “I cannot perhaps intellectually analyse…what is wrong with my work during the design…process”. Yet later he says: “In my view, the discipline of architecture has to be grounded on a trinity of conceptual analysis, the making of architecture, and experiencing…it”. He asserts that “creativity is always linked with the happy moment when conscious control can be forgotten”, yet later that “great artists…emphasize the role of restrictions and constraints”. These positions can be reconciled: the restrictions and constraints must be internalized so that they are no longer conscious. It is then possible to imagine using Alexander’s formal techniques as a framework within which to design, while the actual design work is carried on in Pallasmaa’s embodied, intuitive mode.
Pallasmaa, however, concentrates on the embodied mode of thought, which he sees as neglected, and does little to show its place in the larger picture. This is a pity, because the emphasis unbalances the picture he paints: rather like the hand itself, whose fingers contain no muscles and which is powerless when divorced from the body, his argument, by paying insufficient attention to its context, fails to persuade. It also suffers from a frequent lack of connection: despite an approving quotation from Berger describing van Gogh at work, in which “the gestures come from his hand, his wrist, his arm”, at no point does Pallasmaa explicitly acknowledge the hand’s mechanical dependence on the rest of the body; and similarly, though he quotes Heidegger saying “only a being who can speak, that is, think, can have hands”, he repeatedly gives the impression that he believes the hand can literally think, separately from the brain. Again, he fails to acknowledge the irony, having quoted Henry Moore on the danger of analysing creative work, of his own reliance on artists who have no such qualms.
This lack of selfconsciousness is precisely what Alexander warns against. Pallasmaa asserts that “an established and successful professional would hardly stop to ponder questions such as, what is the floor, the window, or the door”, exactly what Alexander has spent his career doing. Pallasmaa says “the true artist…collaborates with the silent tradition of the craft”, but fails to acknowledge the problems with this position, or indeed, his lack of balance in concentrating on the artistic side of architecture.
In his penultimate paragraph, Pallasmaa says, beautifully, “architecture has to slow down experience, halt time, and…maintain and defend silence”. This is true, though it is no less the task of every designer, and each person who would live aright. It seems, though, that Pallasmaa has confused means and ends: his defence of silence has become a silent defence.
Now and then: another step back
The 1960s was in many ways a more optimistic time than now: the world was a larger place, technology was less powerful and our knowledge relatively undeveloped, but there was a greater sense of progress in the face of more tractable problems. The limits to growth were yet to appear. Alexander’s notes are very much in the spirit of the times, using the latest research to propose a way to address the problems of the day. Pallasmaa’s work is also arguably very much in the spirit of the times in its more personal, inward focus, and its more partial gaze; but in the fifty years that separate the books not only our problems but also our ability to solve them have grown vastly greater, which makes Pallasmaa’s work look at best pessimistic or unambitious, and at worst out-of-touch navel-gazing. “The Thinking Hand” is elegiac and beautiful in places, but its call to mysticism, often shrouded in academic turns of phrase, does the profession no favours, while the occasional overconfidence of the young author of “Notes on the Synthesis of Form” is excused by his directness, vitality and enthusiasm.
Where I live, in London, the problems are only getting worse: the prestige projects are now overwhelmingly for private clients, the public purse is stretched as never before (regulations for schools were recently relaxed to make it possible to build them smaller), as the state continues to abdicate its traditional client roles. The result is a generation of architects whose interests lie in exclusive engagement with the rich and powerful, designing privately-owned, privately inhabited (or, in many cases, uninhabited) investment fortresses which continue to swallow up previously public land. Public use increasingly means shopping, which excludes those without the means to over-consume, and so the vast majority of citizens are left as powerless spectators of the urban landscape, unable to affect or afford anything, able at best to gawp at the attractions and buy something in the gift shops of this enormous private gallery.
Under these conditions, Pallasmaa’s response is an entirely understandable one from an evidently humane person, but it is Alexander who speaks to our need.
Thanks to Thomas Impiglia and Donna Mairi Macfayden respectively for recommending the books to me.
sc3d-blog · 12 years ago
“Speaking Code” by Geoff Cox and Alex McLean
I found this book in Blackwell’s in Oxford, and it instantly appealed to me as someone who has made his living primarily from voice and code. The colourful cover also appeals, and pleasingly it too is a program, written in the colour-notated language Piet.
The book’s subtitle is “Coding as aesthetic and political expression”: it sets out to connect coding to voice, and to analyse how the advent of ubiquitous computing affects notions of action, work, voice and speech.
For someone with little grounding in the relevant political, philosophical and literary background, which as far as I can tell rests primarily on Marx and Arendt, and more recently Virno, with a liberal sprinkling of French post-modern philosophers and the ever-present Žižek, Cox’s language, references and style are all hard work, and come from a left-wing humanities culture I’ve been conditioned to be suspicious of. But the functional code (mostly McLean’s; there is also code poetry, which is fun but, I think, less interesting) is clear, playful and technically thought-provoking. One example patches the Linux kernel to make the machine slow down when it is busy; another tries to say “hello” to every server on the internet (since it’s IPv4, this could be read as a “Last Chance to See” style of greeting!), a sort of polite and manic code-cousin of Wowbagger the Infinitely Prolonged. In the first case, it’s interesting to think of the implications of making a machine behave more like a human; in the second, to wonder how likely one would be to fall under suspicion, or even arrest, for running an apparently harmless script. Other hacks attempt to follow all your followers’ followers on Twitter, or to defriend all your friends while inviting them to meet each other in the flesh.
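(For the curious, here is a minimal sketch of the “hello” idea only; it is not McLean’s code from the book, and the port, greeting and address range are my own assumptions. It walks a reserved documentation range rather than the whole of IPv4, which at any polite rate would take far longer than a lifetime.)

```python
import ipaddress
import socket

def say_hello(addr: str, port: int = 7, timeout: float = 0.5) -> None:
    """Try, briefly and politely, to greet one host (port 7 is the old 'echo' service)."""
    try:
        with socket.create_connection((addr, port), timeout=timeout) as s:
            s.sendall(b"hello\n")
    except OSError:
        pass  # most hosts will simply not answer, and that is fine

# TEST-NET-1 (192.0.2.0/24) is reserved for documentation, so this greets no-one real;
# substituting "0.0.0.0/0" would be the Wowbagger version.
for ip in ipaddress.ip_network("192.0.2.0/24"):
    say_hello(str(ip))
```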
We are treated to a deep dive into the Hofstadterian strange loop that code sets up between speech and action, described in the introductory chapter 0 as “double coding”, the difference between what humans and machines make of a program, not forgetting its comments.
There are some interesting analyses: Mechanical Turk is obviously ripe for political analysis even to this naïf, while there’s a powerful critique of proprietary cloud-based services which is made all the more striking by the fact that it emerges from principled argument rather than the more pragmatic starting point of projects like Freedom Box. Yet my final impression is one of mild disappointment: there are plenty of sources I can now go and read, yet not only does the book not set out a programme (reasonably enough, as it’s an academic, not a polemical, work), but I didn’t reach the end feeling I’d read a compelling analysis or synthesis. Many apparently important sentences are either too vague to be sure of their meaning, or appear to include important misunderstandings without justification: describing JavaScript as “proprietary, indeed owned by Google”, or the operations of conjunction and disjunction as “complex” are just two glaring examples. Other passages which were unclear to me seemed to rely on a background I lack, which is a pity given that one might hope politically-ignorant “codeworkers” such as myself might form a significant portion of the book’s audience.
As a result, the final impression is that the book is both too long for the limited message it does convey, and too short for readers lacking background in one of its two sides (there are plenty of unexplained technical references that will baffle those without a considerable grasp of computing). This is a pity, as the authors clearly have the knowledge of, and sympathy towards, both sides of the subject required to write a compelling book, and education in this area is sorely needed, both by the mass of codeworkers who are by and large politically inert despite being highly educated; and, perhaps to a lesser degree, by political activists who despite being technically savvy have perhaps failed to grasp quite how fundamentally computing has changed the nature of the game.
Ridiculously for a book in MIT Press’s “Software Studies” series, and doubly so for a book that contains source code, there is no electronic version or accompanying web site. Many of the sites referred to in the book seem to suffer from the same insouciance towards preservation, which seems oddly prevalent among digital artists given the lengths to which more traditional workers go to preserve their œuvre. A smaller gripe (which is by no means unique to this book) is that the endnotes comprise both references and expansions on the main text, resulting in a lot of pointless flipping back and forth to the former in order to catch the latter. Rarely have form and content been so out of whack.
sc3d-blog · 12 years ago
For personal space we need personal cyberspace
Thanks to Anna Patalong for posting the Guardian article that got me thinking about this topic.
The internet is famously a refuge for human horrors of all kinds, where whatever your pet hate or perversion you can find like-minded people with whom to celebrate and practise it; but the common image of out-of-the-way websites and password-protected forums only applies to the most egregious examples, the tabloid fodder.
Most of the hate, the quotidian racism, sexism and general denigration of the unfamiliar and uncomfortable is in plain sight, and flourishes in the most anodyne, homogeneous, controlled environments. Those on the receiving end are well aware of it, but few realise exactly what the problem is.
After all, Facebook deletes the nasty stuff, right?
In the Guardian article linked to above, the activist Soraya Chemaly hits the bull on the horns: “It’s not about censorship in the end. It’s about choosing to define what is acceptable.” Indeed, it is about censorship, and that is precisely choosing to define what is acceptable. The problem is that acceptability is defined by the mores of the majority: the heavy hand of the Dead White Male decrees that mere depictions in cake of human sex organs are an abomination and must be suppressed, while groups celebrating rape culture are fine. Facebook claims that “it’s not Facebook’s job to decide what is acceptable”, but by removing anything they have already done so: it’s the removal of the labia cupcakes that legitimises the rape jokes.
This particular imbalance is likely on any service paid for by mainstream advertisers; but imbalance is inevitable in any centralized service: even if Facebook removed nothing of their own accord, they would be subject to local law, and hence two major sources of take-down: law enforcement agencies (with mixed results: goodbye child pornography, goodbye political activism) and corporations (goodbye anything that may infringe our IP). Facebook, in other words, is what it’s like living in a privatized country.
We can certainly work to change Facebook’s idea of what is acceptable, but its centralized control will always tend to conservative uniformity. The physical world, full of iniquity at worst and compromise at best as it is, is much more nuanced, and crucially tends to have the property that the more personal the space, the greater the control we have of it. Interior decoration is more or less up to the inhabitant, but Facebook does not hesitate to censor our personal profiles.
Here then is a new reason to support personal computing initiatives, like the FreedomBox, along with privacy and control of our private data: we’ve unwittingly ceded the very construction of society to the cloud, and I fear that if we don’t take it back, then increasingly the gains we’ve made in physical society will not only be slowed and blunted, but reversed in the new domain that was supposed to be the freest space of all.
sc3d-blog · 12 years ago
Time for Royal Mail to become Royal Email
A curious train of thought started as I was on the way to a local bookshop to buy a Mother’s Day card. I buy most of my greetings cards in this bookshop, but I’ve never bought a book there in the two and a half years I’ve lived in the area; I don’t buy many books, and mostly I buy through Amazon.
I say “through” because I find that most of the books I buy come from Amazon Marketplace sellers, not directly from Amazon, which is why I’m not sure that Amazon is altogether a Bad Thing: through their marketplace I have access to many small sellers, and it’s far from obvious that a foreign company operating an internet marketplace should be expected to pay UK taxes on that part of their profits. (I agree that they should do so for their direct UK trade.)
Further, I sell books through Amazon myself occasionally, something I’d never managed before: it took that combination of convenience and a huge potential market to find buyers for the small range of somewhat rebarbative volumes I offer. But it still feels wrong: the system is biased in favour of large sellers, most obviously, Amazon themselves. Pitting tiny booksellers against each other across the country seems like a way to make everyone miserable, even as it enables the really small sellers (like me!) to get into the game at all.
A healthier system would be one that encourages localism. A monetized version of Freecycle? I might use such a system if it enabled me to make a little more on my sales by not having to pay postage, though I’d want to be able to post inventory data just once: locally and to Amazon.
But then it struck me that the reason nationwide Amazon Marketplace works is that one can send books anywhere in the country for the same amount: Royal Mail’s universal delivery obligation, more or less. Designed to give access to national life for everyone, however physically isolated, this distortion of the market also tips the scales in favour of large organisations.
But it’s letters that are really important to social inclusion, not parcels, and increasingly we’re using the internet to communicate. What if internet access were gradually substituted for universal delivery as the Royal Mail’s obligation? Universal high-speed broadband access has been a political “priority” for some time now, while delivery has been slow. If over the course of a generation (to allow for education, so that access really is universal: there are still far too many people who are not internet-literate to make this fair today) we made this substitution we could complete this important infrastructural investment, at the same time removing the artificial distortion to the physical landscape and thereby encouraging local trade. The infinitely malleable online markets would quickly adapt to postage-per-mile. There would need to be exceptions, for example, for sending medicines by post, but this could be charged to the agency with the relevant obligation, in the case of medicine, the NHS, thus rendering the argument over and justification for different cases more transparent than at present.
Why give the Royal Mail this job, which has little to do with their current function or expertise? Because in fact it has everything to do with it: the function is a social one, the same as currently discharged by the Mail. The current fashion in the public sector when desiring to regulate a sector in the public interest is to set up a commissioning quango to buy in services from private sector organisations with the relevant technical competence (hopefully). This seems to me cart before horse when the mission is a social one. The Royal Mail and closely allied Post Office have decades of successful experience in delivering universal services (banking, benefits, bill paying…); this is “just” another one. Also, it seems hugely wasteful to spend a generation winding down one service while ramping up another that, modulo technology, is very similar in shape: a last-mile communications service.
It’d be good to have a financial incentive to actually buy books from my local bookshop.
sc3d-blog · 12 years ago
The reason I’ve not heard of your cool language is because it’s non-free
Dear developer/researcher/company, I’ve just found your cool language. It might be recently announced, or it might’ve been lurking on the ’net for years. I read your page about it, and I went to download it. Oh dear, it’s non-free. Maybe I’ll try it anyway, though I won’t be using it for all the obvious reasons. Still, that explains why I’ve never heard of it until now.
One thing that a lot of developers seem to overlook is the sheer inertia that any non-free program has to overcome. If your program is free, a lot of people who wouldn’t otherwise use it will do so, and there’s a good chance of its getting into free software distros (which, by the way, are not just for free OSes). Free software is also much more likely to spread via non-free products, hardware or software. All that goes double for language implementations, because issues of portability, maintenance and licensing are even more acute when you’re making an investment in writing code.
So increasingly I conclude that language authors who insist on non-free licensing Just Don’t Get It. There’s one big exception: if your language is secret sauce, then hoarding it to try to make a fortune is at least rational. An example is K/q, which appears to have done very nicely for its author, who also wisely chose as his market the financial sector, in which your clients are likely to be less affected by the factors mentioned above: they’ll have big budgets, and be writing code they don’t expect to distribute, and which may well not have a long shelf-life.
Otherwise, the landscape is littered with duh. Until a few years ago I excused those who had perhaps not fully understood, or even predated, the internet-mediated explosion of free software (though there are plenty of examples of earlier generations who understood the value of freedom as well as anyone, such as Donald Knuth, whose typesetting languages TeX and Metafont dominate mathematical typesetting, and have maintained a large presence in many technical fields for over 30 years), but no longer. Some authors eventually see the light: Carl Sassenrath, author of the Amiga OS, eventually freed his intriguing REBOL language after 15 years of getting nowhere (and, as far as I can tell from reading between the lines, running out of money). Development now seems to be at least trickling along, and from reading the commit logs, he’s just reviewing, not writing. Other projects are almost too late: Strongtalk, a Smalltalk variant with static typing, was freed by Sun in 2006, ten years after the developers were acquired by Sun, and since then, no-one seems to have taken it on, and this now 20-year-old code is languishing, probably never to be relevant. Then there are “close-but-no-cigar” efforts such as that of Mark Tarver, whose Lisp descendant, Qi, was proprietary, but who almost learnt the lesson with its successor, Shen, except that he forbade redistribution of derivative works which do not adhere to his spec for the language, rather than simply requiring that such derivatives change their name.
Stupidest of all, however, are the academic projects that suffer the same fate. Until around 1990, code was only a by-product: academics mostly wrote papers, and it was the published papers that contained the interesting information; you could recode the systems they described for yourself, and since programs were short and systems short-lived and incompatible, that was fine. Now, research prototypes often involve significant engineering, and unpublished code is wasted effort. It’s incomprehensible that publicly-funded researchers are even allowed not to publish their code, but some don’t. The most egregious current example is the Viewpoints Research Institute, whose stellar cast, among them the inventor of Smalltalk, Alan Kay, is publishing intriguing papers, but only fragments of code: it’s as if they’ve learnt nothing from the last 40 years.
I really hope this problem will die with the last pre-internet generation.
sc3d-blog · 13 years ago
GNOME 3 extensions: the madness must end
GNOME 3.6 has been released, and as usual that means updating a raft of extensions that I use to restore sanity to my desktop (mostly, switching off all the guff I don’t use, and putting old apps’ system tray icons back up at the top of the screen where they’re actually useful, rather than hidden away in the message tray).
Most of the extensions should just work, but they don’t, because GNOME Shell extensions are broken by design. Let me count the ways:
Extensions are declared to work with a version of GNOME, not a version of the API. Typically, the version in the metadata.json file includes a revision number. At the very worst (since in practice the GNOME team seems to change the API with every release, of which more below) it should include a minor version number (e.g. “3.6”; see the sketch below). There should be absolutely no need, barring bugs in GNOME Shell, to update extensions across minor releases.
Minor API changes, like removing an underscore from an identifier, break extensions. Seriously dudes, stop it. We never had this pain with GNOME 2: Compiz extensions, on the whole, worked from one release to the next. GNOME 3 should be GNOME 3. By all means tweak the API as it goes along, but make it backwards compatible.
No API documentation. The fact that the JavaScript bindings are generated automatically is great, but there’s no documentation for any language, and the few handy HOWTOs don’t make up for it.
The built-in tools (Looking Glass) are cute, but useless for development (Looking Glass stops the desktop, so you have to keep switching between it and your editor). It shouldn’t be harder to patch a GNOME Shell extension than a GTK app written in C; it should be much easier.
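To make the versioning complaint concrete, here is a hypothetical metadata.json (the uuid, name and description are invented purely for illustration); the shell-version field pins the extension to whole GNOME releases rather than to any API version:

```json
{
  "uuid": "example-extension@example.org",
  "name": "Example extension",
  "description": "A hypothetical extension, used here only to illustrate versioning",
  "shell-version": ["3.4", "3.6"]
}
```

Every time a new GNOME release appears, that list has to be bumped by hand (or the check bypassed via extensions.gnome.org), even if nothing else about the extension has changed.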
On the plus side, extensions.gnome.org works faster and is better now: for example, it allows you to install extensions that have not yet been checked on the latest version. For several extensions that was all I needed; for others I needed to change the version number manually; a couple needed trivial patches, and some others I had to switch to alternatives. This stupid dance was more than half the work in upgrading from Ubuntu 12.04 to 12.10, a fact which reflects only a little credit on Canonical, as I would class the amount of upgrade-induced work there as “acceptable” rather than “minimal”.
sc3d-blog · 13 years ago
A tale of two Ebens
Recently I’ve been following the spectacular progress of Raspberry Pi with awe and delight. Eben Upton’s brainchild of a computer that empowers and inspires children to learn to program has garnered a lot of attention in the adult world; it remains to be seen whether its plucky British inventor and retro computing appeal can translate into real success where it matters—with children.
Meanwhile, I’ve been led to some startling talks given recently by Eben Moglen, the founder of the Software Freedom Law Center, and general counsel for the Free Software Foundation. They both deal with the importance of free software and free hardware, and the second in particular recasts the arguments into the current political climate as a strategy for getting the attention of politicians. It’s in the second talk too that Moglen insists on the importance of children, describing children’s curiosity as the greatest force for social change that we possess.
Moglen’s urgent and inspiring call to action (“technologists must engage politically to save civilisation”) rather overshadows Upton’s (“get kids excited about programming with cool toys”), but they share at least two foci: not just children, but also small inexpensive computers; for Moglen spends some time talking about the Freedom Box project, to create small inexpensive low-power servers that everyone can use to regain control over their own data.
And indeed, people are already making the Freedom Box software stack run on Raspberry Pis.
Is there some hope that, as well as claiming the children, we may also be able to reclaim their parents? I find it rather sad that so much emphasis is placed on children, as it overlooks the childlike potential of adults: for that potential is all of ours while we yet live.
sc3d-blog · 13 years ago
Raspberry Pi: enough to go round?
Raspberry Pi’s most important achievement so far is in generating considerable publicity. The genius of its marketing is that it appeals to the current generation of tech journalists, who were raised on the 1980s home computers whose spirit it invokes. Whether it will appeal to today’s children, however, is less obvious, and arguably more important. Having the inner workings exposed (unlike home computers) should help, though it also exposes a serious failing (more on that later).
What measure of success?
How successful it is likely to be depends largely on what you think the problem is.
Reproducing the past
Raspberry Pi’s founder, Eben Upton, defines the challenge as getting a pocket-money-priced computer, suitable for teaching children to program, into mass production. The R-Pi meets that definition for the sort of middle-class household that boasts a spare HDMI-compatible monitor plus an old mouse and keyboard and offers generous pocket-money, but elsewhere failing to count input devices and a display in the cost seems disingenuous, and an all-software solution that ran on just about anything, including phones and old PCs, and could be freely downloaded, would seem nearer the mark.
Upton would reply that merely adding “an app for that” doesn’t invite the child to program as the old home computers did (when you switched them on you were immediately presented with a programming environment). Compare this with games consoles and PCs: you can program them, but by default they offer games or an office desktop respectively.
More seriously, many parents quite rightly lock down the computers their children use to prevent their visiting undesirable web sites or installing new software, or even insist on their being supervised: forbidding conditions under which to nurture the sort of exploratory play by which we all learn to love programming. A separate device which belongs to the child, contains no sensitive parental data, and can’t go online addresses all these problems, and the child can be left alone with it as safely as with a book.
Rebuilding the workforce
So far, so good: we’ve recreated a small corner of the 1980s, and a small self-selecting segment of relatively privileged children will have a chance to become programmers. But we already need far more programmers today than when the children of the 1980s entered work, and we’ll need even more when today’s children grow up. To make up the shortfall, programming needs to go mainstream.
This is a challenge that’s already being met locally in many areas; Upton’s approach is to reach out to children directly via programming competitions (or “bribery” as he calls it); although this approach might work without substantial involvement by schools, it seems unwise not to make a serious push for inclusion in the school curriculum.
Remaking society
I believe, however, that programming is far more important and central a skill for the modern world than even its most ardent industrial cheerleaders suggest. Being a non-programmer today is like being illiterate two hundred years ago: it’s possible to get by without understanding anything about programming, but you end up relying heavily on others.
It’s a subtle point, because it’s rare that one needs to actually read or write code; rather, one needs to understand how computers work because increasingly they are embedded in, and hence govern, the systems we use to organise our lives.
Many competent and confident users of computers are reduced to impotent gibbering by machine malfunction, because learning how to operate a computer gives one very little insight into how they fail, whereas understanding bugs and other failures is central to learning how to program. It’s as if the person who could help you repair your blender is the one you’d ask how to cook a soufflé, or as if the person best able to navigate a car was a mechanic.
(Why computer systems are like this is a fascinating question whose answer involves the immaturity of the technology, its complexity, and the degree to which interface and systems design is still driven by technical rather than human considerations, but one I can’t elaborate on further here.)
Even more important is the mindset underlying programming: programmers, like scientists, believe that systems have rules which, if they can’t be looked up (“reading the source code”) can be discovered and codified (“reverse engineering”). But programming has an additional, empowering belief: that rules can be changed or replaced. In a society that is increasingly rule-bound and run by machines, a programmer’s mindset offers both the belief that things can be improved, and the tools to change them. That is why it’s essential that every child should understand at least the principles of programming, even if they never read or write a line of code as an adult.
Scaling up
Hence, it is necessary that programming become part of the core school curriculum, and it will be a good sign that it is embedding itself in our culture when it becomes so. Raspberry Pi has three major problems here: the hardware, the software, and connectivity.
Seeing to the bottom
The problem with the hardware is optically obvious, because of R-Pi’s lack of external casing: it’s entirely closed. You can see the components, but you can’t take it apart to see how it works, or modify it in any way. This is partly a result of the nanometre scale on which modern electronics is built, but it’s also caused by the increasingly draconian intellectual property régime under which we suffer. Unfortunately, the beating heart of the R-Pi, a Broadcom SoC (“System on a Chip”), is a prime example of this.
Even more unfortunately, it’s hard to see how anything like the R-Pi could be built without such regressive technology (in this case, via special help from Broadcom that Upton, as an employee, managed to secure). All this means that the R-Pi is not only of little use in firing the imagination of the next generation of hardware engineers (just as sorely needed, if not in such numbers, as the software kind), but its hardware also reinforces the “black-box, do not touch” mentality that its software is trying to break down.
Programming for all
Unfortunately, the programming environments provided, although open, are the standard machine-first arcane languages and tools that adults struggle with. Why not use something like Squeak Etoys, which is based on decades of research in both programming and teaching programming? (The plurality is part of the problem too: the R-Pi offers distracting choice, unlike old home computers which simply dumped you into their one built-in programming environment.) Fortunately, this is easy to fix: just update the software shipped with R-Pi.
Changing the world, learning together
The final problem, connectivity, is a subtler one. Above, I mentioned that an advantage of giving a child their own device is that it need not be connected to the internet, and hence can be safe for them to play with unsupervised. But the R-Pi lacks other sorts of connection that are important. First, it can’t affect the world physically (though peripherals attached to it could). While the privacy and absolute power one enjoys in the virtual world inside the computer is exhilarating and empowering, children also love toys that have real world effects, and it’s an important aid to the imagination to see that one’s electronic creations can have direct physical outcomes.
The Logo systems of the ’70s and ’80s had a natural real-world extension in the form of drawing “turtles”; today we have Lego Mindstorms, but they’re expensive, and only partly open. What we need is a RepRap for children. Secondly, children want to play with each other; their computers should be able to network too. The One Laptop Per Child machines do this; R-Pis should be able to too (and again, fortunately, it’s mainly a matter of software).
Feeding the five million
In summary, Raspberry Pi is, closed hardware aside, a great platform that could help catalyse a much-needed revolution in the perception of programming. The good news is that the remaining technical steps are in software, and can be taken without the heroic step of re-mortgaging one’s house, as Upton did to fund R-Pi. The bad news is that the rest of the job is social, and hence much trickier to achieve than a bank loan.
sc3d-blog · 13 years ago
Computing can’t be left to teachers and business
Today the education secretary, Michael Gove, announced an overhaul of the ICT curriculum. This is good news and long overdue; having recently been castigated by the great, the good, and Google for our poor ICT teaching, the government has responded and is launching a campaign to overhaul the way ICT is taught: out with word processing and spreadsheets, and in with programming.
So, I should be happy: mission innocents saved accomplished? Sadly not; apart from the natural wariness of any “major government initiative”, this one falls down in two important ways.
First, Gove made his big announcement at an education industry gathering, BETT, and made several references to the importance of industry, both as determining what skills should be taught, and as partners to help teach them. In some vocational subjects, this makes sense, but ICT is a compulsory part of the core curriculum. It is not the function of education to prepare workers for business, and businesses are neither interested in nor competent to decide how to educate people. There’s a very obvious sense in which this is the case as far as ICT goes: children must be educated for life (even if the rhetoric of continuing education bears full fruit, adults simply cannot learn as children do), while the ICT skills that business demands change every few years. So here, as in traditionally academic subjects, we should view any industry involvement with the scepticism (and, dash it, cynicism) that it deserves.
Secondly, the announcement essentially removes ICT from the National Curriculum (the Whitehall-speak is “withdrawing the Programme of Study”). There are positive noises about supporting teachers with actual money, alongside the usual guff about liberating them, but the government are still washing their hands of responsibility of what is now the most important subject taught in schools.
As the culture minister, Ed Vaizey, understands, knowledge of how computers work is now as fundamental as literacy. It’s too basic and important to leave unsupervised even if, on the other hand, it’s so new and changing so rapidly that Gove is correct in saying that a traditionally-written curriculum “would become obsolete almost immediately”.
But the elements of computing do not change so rapidly, and they are the important bit. In the mid-’90s the undergrad course on computation theory I attended was thirty years old, and it was just as relevant and up-to-date as it had been when it was written. Many of the computer languages and operating systems in use today are at least as old, as are almost all of the concepts on which they are based.
And although many of the elements have been with us for decades, they are only now becoming fundamental to our society in the same way as literacy and numeracy. Very few people have any idea what that really means. Two crucial points need to be made: that everyone needs to learn how to program, not just programmers; and that programming is not just about computers, just as literacy is not just about speaking, reading and writing. The programming mindset can transform one’s world-view, and, like literacy, it’s particularly empowering, as it brings not only an understanding of how to decompose problems and invent rules to solve them, but the sense that the rule systems which govern our society are software, and can be changed.
Working out how to get all that across will certainly be aided by freeing teachers to experiment. Championing the process while capturing and disseminating best practice and embedding it in our culture needs central leadership. This is a far from unpromising announcement, but it’s only the beginning of the cultural shift we really need.
sc3d-blog · 14 years ago
Si monumentum requiris, ædifica!
Steve Jobs is dead, and the plaudits are rolling in. In a career lasting a little over thirty years, Jobs co-founded Apple and led it to all its triumphs; the company has been part of every phase of the computer revolution, and, for many people, has been the exemplar of each of its waves, from the Apple ][ to the Macintosh to the iPod to the iPhone to the iPad. Of the countless innovators who drove digital technology into our lives, Jobs was one of the few able to embed it in our hearts and minds, and by far the most influential. His uncompromising insistence on the marriage of form and function set expectations for industrial design well beyond his industry. Stephen Fry was not far from the mark when he told BBC News in August “I don’t think there is another human being on the planet who has been more influential in the last 30 years on the way culture has developed.”
Here in 2011, it’s amazing to look back over the last 30 years and see how completely computing technology has transformed our lives and culture. But step back a bit further and what’s odd is not how much, but how little we’ve achieved. Looking at Jobs’s achievements, I couldn’t help wondering how we’d ended up in a world where he really was the most successful innovator of his era. In 1968, Doug Engelbart, head of the Augmentation Research Center at the Stanford Research Institute, showed us the future in one astonishing 90-minute demo, including the public debut of the mouse, hypertext, remote collaboration, videoconferencing and more, the result of just six years’ research by a team of 18. It would have been more believable coming from Stephen Fry’s alternate world in Making History. How did the computer revolution underperform so badly?
When beauty is just skin-deep
For the truth is, pretty much every revolutionary product Apple brought to market was merely the first successful incarnation of decades-old ideas. The Macintosh, released in 1984, embodied user interface work done at Xerox in the 1970s. The latest incarnation of its system software, Mac OS X, is based on NeXTSTEP, the operating system developed by NeXT, the company Jobs founded when he was fired by Apple in 1985. NeXTSTEP was itself built on the Mach 2.5 kernel[1] and BSD UNIX, both ’80s repackagings of ’70s designs.[2] The hardware was much the same: off-the-shelf parts, dressed beautifully. Jobs told Wired “When people look at an iMac, they think the design is really great, but most people don’t understand it’s not skin deep”, but that’s exactly what it was: where Apple excelled was industrial design. From the look to the choice and arrangements of internal parts,[3] formal and functional elegance was all. And it helped the bottom line: as Jobs told Wired, “Focus does not mean saying yes, it means saying no,” and that meant simplicity not just in individual products, but in the product line as a whole. Despite not being the world’s largest manufacturer in most of its product categories, Apple boasts industry-beating economies of scale thanks to its tiny product range, which means that it uses more of each component than other manufacturers, and can thereby boost its profit margins, or demand uniquely customized components from its suppliers.
Tales vs Tools
The power of Apple’s brand is legendary inside and outside the company. There’s the story of the contractors who were laid off and yet came to work for six months to finish their program, as well as the familiar pictures of fanbois[4] queueing at Apple Stores around the globe for each new product release. Unfortunately, Apple’s brand has become an end in itself for the company. Digital devices are tools, but Apple realised, as car manufacturers had before them, that it was not merely more effective to sell them with stories and as symbols of a better, richer life, but as the tools the purchaser needed to create that life. In his commencement speech at Stanford in 2005, Steve Jobs said: “Don’t be trapped by dogma—which is living with the results of other people’s thinking.” But Apple told its customers why and how to use its products, and from its experiments with allowing third-party manufacturers to build Mac-compatibles to its iTunes media sales and now its App Store first for iPhone and then Mac OS, it has leveraged its customers’ loyalty to dictate what they may do with their devices.
What price freedom?
And so Jobs compromised with profit. I don’t think it had much to do with avarice; offering consumers desirable products is a tried-and-tested route to success, and computing has many visionaries whose inspiration has touched almost no-one outside the field: Doug Engelbart’s astonishing array of new technologies, Chuck Moore’s insistence that computers can be thousands of times simpler, David Gelernter’s reimagining of the user interface, Carl Sassenrath’s crusade against software complexity; all have spent decades doing incredible work that goes almost unused. Xerox, whose Star workstation from the late ’70s was the basis for the Macintosh, came up with the “memory prosthesis” in the ’90s, an idea which makes perfect sense with today’s smart mobile phones, and which no-one has implemented. The obscure publicly-funded Viewpoints Research Institute, whose board of advisers reads like a who’s who of computer science, and whose small research staff all have stellar track records, is trying to rewrite the rules of building software (and as far as one can tell, succeeding), but there are few tangible results. From academics to entrepreneurs, none of these brilliant inventors has had an iota of Steve Jobs’s direct impact on the everyday world. Sadly, one of the most brilliant, Jef Raskin, who started development of the Macintosh in 1979, was forced out of Apple by Steve Jobs, failed to find commercial success elsewhere, and in 2004, like Jobs, was diagnosed with pancreatic cancer, from which he died the following year. His book on interface design emphasizes the importance of software design based on how people actually use computers, rather than on superficial attractiveness.
There’s another, more fundamental sort of freedom that Jobs eschewed in his determination to control users’ devices: the freedom to study how the software and hardware works, modify it, and share those modifications. While most users will never want, or, in most cases, be able, to do such a thing, it’s an option that has proven to be of powerful benefit to society, from the days of computing pioneers sharing programming techniques, to the rise of the free and open source software movements. Under Jobs, Apple itself both used and contributed to open source and free software, but usually only when forced to by free software licenses, and never as a corporate badge of pride. Apple’s brand emphasised revolution, but its product policies encouraged conformity. Even Steve Jobs could not conform the market square to the circle of free users.
Si monumentum requiris, ædifica!
And so we are left with an immense challenge: the greatest popularizer of computing technology is dead, but the digital revolution has only achieved a fraction of its potential. If we want his monument, we’ll have to build it ourselves.
[1] Not a microkernel; Mach didn’t become that until the (final) version 3.0.
[2] The original Mac OS was newer, built from the ground up by Apple for the Macintosh; but it was not up to the job of running multiple applications on the same machine, and despite two separate attempts to build a proper operating system from scratch, Pink and Copland, Apple never managed it. An ex-Apple employee, Jean-Louis Gassée, proved it could be done with Be Inc.’s BeOS, but Steve Jobs’s charisma and the more mature state of NeXTSTEP won Apple over when they were looking for a basis for Mac OS X.
[3] The same Wired article quotes a friend of Jobs, in the early days, describing how he would dictate “how to lay out the P[rinted]C[ircuit] board so it would look nice”.
[4] From the French faune des bois, meaning “wood-faun”, a creature that happily spends the night outside, sometimes pictured with the tail of an ass.
sc3d-blog · 14 years ago
Old-hat futurism
I had not previously heard of Ben Hammersley, but he says he “helps people understand the modern world.” Recently, he gave a speech to the IAAC (Information Assurance Advisory Council), and the tone was very much “you are all living in the past”. He makes some excellent points in the second half of his talk, about how security theatre is widely seen by the public as an oppressive sham, and how it’s no longer acceptable for leaders to be proud of their technical incompetence, but the first half is both out of date (worrying for a self-described futurist) and out of kilter (worrying for someone supposedly acting as a “translator” between those inventing the future and those running the show).
As often with people who get it badly wrong, he starts from the right premise: he quotes William Gibson “the future is already here, just not evenly distributed”; and then ignores it, going on about how our lives are now all on Facebook; we all expect people to be instantly available on the end of a phone 24/7 etc. etc. This is, of course, all true…for the tiny minority in power. But he’s missed the other (and sharper) edge of Gibson’s blade; the future, like the wealth to which it’s so tightly linked, is getting more and more unevenly distributed: not only are there people half-way round the planet still living in the stone age, but there are people a few hundred yards away living half in the present and half in the 1970s. That’s a much more important split than Hammersley’s between people who grew up before and after the end of the cold war.
Sentiments like “Facebook, Twitter, Google and all the rest are, in many ways the very definition of modern life in the democratic west” are just evidence of the echo-chamber mentality those three engender. And anyone who still believes the absolute “networks beat hierarchies” simply hasn’t paid attention since 9/11. Scariest of all is that, tiny as the minority it represents is, this view really _is_ reality, in the sense it’s true for everyone with power, who have a huge impact on what happens to everyone else. I’m not sure I really want our rulers to understand Hammersley’s future (delay the inevitable as long as possible!), though I also suspect many of them have a much better grasp of it than he gives them credit for.
26 notes
sc3d-blog · 14 years ago
Text
New Programmers Wanted For Old Stuff
Although computer science seems to have lost the glamour it had in the ’80s, there still seems to be a steady stream of volunteers to work on all sorts of exciting free and open-source software projects (even though my alma mater is having trouble finding good applicants to read Computer Science; more on that story earlier, and also, I hope later).
But what about the less exciting stuff? The fundamental tools and applications that we programmers still use, directly or indirectly? I mean GNU coreutils and GNU autotools, not to mention pieces that we take even more for granted, such as the shell, the C library and the kernel. (In case this all sounds like thinly disguised Linux-centrism and you’re wondering why I didn’t just say “bash, glibc and Linux”, that’s because while I work mostly on GNU/Linux, I’m mostly interested in portable programs.)
“But isn’t this all legacy stuff?”, I hear you cry. If you never stray from the comforts of Eclipse, then maybe yes; but there are still plenty of us typing “ls” and “grep”. If you’re one such, and you contribute to free software, why not help out? It’s not all legacy code in maintenance mode, and we certainly need help. Rather like the MS-DOS team in the early ’90s, there’s a tiny core of maybe a few dozen major contributors maintaining much of the command-line software stack (outside the kernel and gcc). Unlike them, we are mostly not paid to do so; but we do have many opportunities for innovation and invention.
The UNIX command line may seem like a dead backwater, of interest only to the dull writers of sclerotic standards, but that’s to mistake effect for cause. Yes, it’s mature, and hence capable of standardization, and that’s a good thing: even 10 years ago, many UNIX boxes lacked a decent POSIX implementation, whereas now almost all have one (or can get one by adding GNU). The ISO C99 standard added important features to the language. GNU autotools has matured from a somewhat cranky portability tool into a great leveller, making it easy to write code that will build and run on any major OS (yes, including Windows). That’s thanks not just to its increasing maturity and stability, but also to newer projects such as the amazing gnulib, which papers over the cracks in a wide range of POSIX API implementations and provides useful data structures and other APIs missing from the standards, and autoconf-archive, which supplies autoconf macros for dozens of common configuration tasks and for hundreds of languages, tools and libraries.
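To make that concrete, here is roughly what the workflow looks like; a minimal sketch only, assuming a conventional autotools layout, with the module names purely illustrative:

    # Pull portability shims and extra APIs from gnulib into the source tree
    # (gnulib-tool then prints the few configure.ac and Makefile.am lines to add).
    gnulib-tool --import getopt-gnu xalloc
    # Drop any AX_* macros you want from autoconf-archive into m4/, then
    # regenerate the build system and build as usual.
    autoreconf --install
    ./configure && make && make check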
Using these tools I was able to remove all platform-specific C from GNU Zile, a cut-down Emacs clone, cutting its code size by about 2,000 lines (20% of the code base), and slash the size of its configure.ac (build system configuration file). All this while adding a test suite with nearly 100 tests, plus a few extra features.
And it’s not just Zile: stalwarts like GNU grep and coreutils have been made over, and, largely unnoticed by users, are looking much prettier under the hood (though there are important bug fixes, new features and performance improvements too). Even Emacs, with its immense code-base and ancient build system, is gradually being brought up to date.
The most exciting thing for me is the synergy: the more the tools are improved, the greater the leverage obtained when they are used; and the more they are used, the less effort is required to maintain the packages, and the work becomes easier: less time wrestling the system, more time improving it. And more fun: if you think C is hard, dull and slow work, think again. We too can have quick rebuilding thanks to ccache, and easy bug-bashing with Valgrind, not to mention code completion and navigation either in an IDE like Anjuta or in the evergreen Emacs, which is finally integrating and polishing a decade of work on modern IDE tools that has until now been largely invisible (and unusable).
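In case that sounds abstract, this is the sort of thing I mean; a hedged sketch, with the program’s path invented for the example:

    # Tell configure to compile via ccache, so rebuilds after small edits are quick.
    ./configure CC="ccache gcc"
    make
    # Run the freshly built program under Valgrind to catch memory errors and leaks.
    valgrind --leak-check=full ./src/prog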
Unfortunately at the moment this reduction in effort is being absorbed simply by enabling a tiny team to keep more packages up to scratch, but wouldn’t it be great if more people joined in?
Next time: the future of the past: it gets even more exciting!
9 notes
sc3d-blog · 14 years ago
Text
Lots in translation
Recently I went to a recital by a couple of friends, and discovered that I’m not the only one who gets my recital programme translations from The Lied, Art Song and Choral Texts Page. Many translators are happy to let you use their translations in concert programmes, suitably acknowledged. The original lyrics, where available, are reliable: they are usually checked against original sources, often include variants set by different composers (who often seem either too lazy to check an original copy, or to think they can improve the text), and are out of copyright, so you don’t need permission to use them.
There are some mistakes in the translations; as I’ve said before about ebooks, if you find mistakes, please fix ’em: Emily Ezust, the maintainer of the site, is happy to receive corrections. Most of the translations are useful for study, and many are not bad for programmes, though I prefer a little more poetry in a programme translation than in a strict gloss for the singer.
Back to the recital: there was one oddity which even copy-and-paste couldn’t explain. The last song in the programme was Schubert’s Ave Maria (why, in a recital, sing it in Latin?), and the translation was the version of Hail Mary familiar to Anglophone Catholics for some centuries. But it too had a copyright notice!
1 note
sc3d-blog · 14 years ago
Text
Proof sought
Christmas 2010 was when half the people I know got Kindles. I even got one myself, as a Christmas present for my girlfriend. Suddenly, there are millions more pairs of eyeballs on digital books, many thousands of which belong to acute brains adept at finding errors and misprints. And nearly all of them, I fear, are going to waste.
There are three reasons for this.
First, the Kindle does not encourage you to read free books. (There’s little point proofing commercial books, not so much because they tend to contain fewer errors as because I’ve never come across a non-academic publisher who cares; and anyway, if you’ve paid for a book, shouldn’t someone have proof-read it? I’d be interested to see publisher initiatives here, though; bug bounties, anyone?) This problem is easily fixed, though: many sites offer a huge range of books you can download to your Kindle via its built-in web browser, including the oldest, biggest and best free online book repository, Project Gutenberg. So go to it: sample that rapidly growing treasure trove of tens of thousands of public domain works, and never pay for a downloaded book again.
Secondly, no ebook reader or reading program I’ve yet seen has built-in functionality for noting errata. I use bookmarks in FBReaderJ; Kindle users can use notes. But even this primitive method is easy to use, and I find it rarely interrupts the flow of reading, even for books containing hundreds of errors. So, note any errors you find!
Thirdly, online libraries often don’t make it obvious that they welcome reports of typos and errors (they do!), or make it easy to send them. (Project Gutenberg changed its email addresses last year to reduce spam. This wasted my time last year, when they introduced the “2010” suffix in April, and again this year, when I had to go and check whether they’d decided to update it automatically every year. It seems not. Maybe in April? Really, Gutenberg, just use spam filters; it’s what they’re for.) Many other sites repackage texts from Gutenberg, and some fail to update to the latest version, such as Feedbooks, whose books work better on my phone than Gutenberg’s own, but often contain errors already fixed on Gutenberg. (They told me they have to apply updates to their books by hand; I have offered them help with automatic updating, which isn’t rocket science and uses tools that programmers use all the time, but without success so far.) It’s clearly best to report errors to the original source of the text if at all possible, but if you can’t, don’t worry. Spend a couple of minutes finding out how to report typos to wherever you got your book from, and try it; you’ll soon find out if they’re unappreciative.
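For what it’s worth, here is the shape of the automatic updating I had in mind; a sketch only, with invented file names, and not anything Feedbooks or Gutenberg actually run:

    # Record the local packaging tweaks as a patch against the upstream text...
    diff -u upstream-old.txt our-edition.txt > our-tweaks.patch
    # ...and when upstream publishes a corrected edition, re-apply them to it.
    patch -o our-edition-updated.txt upstream-new.txt our-tweaks.patch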
We have an amazing resource here, and it’ll only get better (if it isn’t perfect already). Digital editions, unlike their paper forebears, need not go out of print: errors can be fixed forever. If every reader of a free ebook reports half a dozen errors, even dodgily scanned texts will soon shine. And this is for everybody: free ebooks can be printed and bound, allowing imaginative publishers, libraries and donors to get them into the hands of those who can’t afford ordinary books, let alone a Kindle.
But aren’t lots of people doing this already? It doesn’t seem so: in 2010 Project Gutenberg started an automated errata tracker, which allocates each new report its own number. By the end of 2010 it was up to about 500; by contrast, in 2010 alone several different open source software projects racked up over 100,000 bug reports each. [Stop Press: Gutenberg now seems to have abandoned automated erratum numbering.] Despite its richness, Gutenberg has only a handful of full-time employees, and runs on volunteer labour and donations (by definition, it can’t charge for its books). And that’s just the biggest Gutenberg project, the one in the US. To avoid exposure to the vagaries of copyright law under different régimes, the various Gutenberg sites in different countries are entirely independent. Gutenberg Australia, which is the second-biggest original source of English books after Gutenberg US, is run by one person, the heroic Colin Choat, in his spare time.
So, please help!
1 note
sc3d-blog · 14 years ago
Text
Unchanging rhetoric on higher education
I tried to post this as a comment to the Guardian article Universities must cut private schools intake, says Simon Hughes, but the web site said “Your browser sent a request that this server could not understand.”
The tune of the government, sadly but unsurprisingly, never changes, and continues in its hypocritical vein, suggesting that ministers are not really interested in improving access to the élite universities most of them went to.
If they were, then we might expect ministers to tell us how things have changed over time (how does access to Oxbridge now compare with 30 years ago? much better than it was, but still some way from reflecting society), to laud successes, and to commission and act on research to improve things further.
And they might stop implying that what Oxbridge want to do is keep the plebs out and keep educating the rich.
I was briefly acting Director of Studies in Computer Science at one of the bigger Cambridge colleges in the late 90s. My successor, a state-school-educated Northerner, had to address the problem that applications were falling off (apparently in the 21st century, computers are no longer cool), and there were barely enough applicants for the places available, let alone good applicants. So he went on the road, mostly to state schools whose students weren't applying to Cambridge. In jeans and T-shirt he'd talk to kids, encouraging them to apply, and to their teachers. Often it was among the teachers that he'd meet the most resistance: "Even if we did get our kids to apply to Cambridge, we wouldn't apply to you, you're from a posh college" was one of the more bizarre comments he got.
Cambridge has had a university-wide programme to widen access since I was an undergrad there nearly twenty years ago. My college has its own scheme too, and staff were encouraged to do the sort of thing my friend did. The University is desperate to get good students (even at the peak of the Computer Science boom, in the early 90s, the department was worried that applicants' maths skills were weak), and it doesn't care where they come from.
The way the government is increasingly piling up-front costs on to students, the answer is going to be "from rich families and/or abroad". The new funding system may be rational, it may even be fair, but it won't broaden access, and my friend will still be left wondering where on earth he is supposed to find the next generation of computer scientists.
0 notes