moeblob · 8 months ago
Text
Tumblr media
RK900.
125 notes · View notes
technato · 7 years ago
Text
Carnegie Mellon is Saving Old Software from Oblivion
A prototype archiving system called Olive lets vintage code run on today’s computers
Illustration: Nicholas Little
In early 2010, Harvard economists Carmen Reinhart and Kenneth Rogoff published an analysis of economic data from many countries and concluded that when debt levels exceed 90 percent of gross domestic product, a nation’s economic growth is threatened. With debt that high, expect growth to become negative, they argued.
This analysis was done shortly after the 2008 recession, so it had enormous relevance to policymakers, many of whom were promoting high levels of debt spending in the interest of stimulating their nations’ economies. At the same time, conservative politicians, such as Olli Rehn, then an EU commissioner, and U.S. congressman Paul Ryan, used Reinhart and Rogoff’s findings to argue for fiscal austerity.
Three years later, Thomas Herndon, a graduate student at the University of Massachusetts, discovered an error in the Excel spreadsheet that Reinhart and Rogoff had used to make their calculations. The significance of the blunder was enormous: When the analysis was done properly, Herndon showed, debt levels in excess of 90 percent were associated with average growth of positive 2.2 percent, not the negative 0.1 percent that Reinhart and Rogoff had found.
Herndon could easily test the Harvard economists’ conclusions because the software that they had used to calculate their results—Microsoft Excel—was readily available. But what about much older findings for which the software originally used is hard to come by?
You might think that the solution—preserving the relevant software for future researchers to use—should be no big deal. After all, software is nothing more than a bunch of files, and those files are easy enough to store on a hard drive or on tape in digital format. For some software at least, the all-important source code could even be duplicated on paper, avoiding the possibility that whatever digital medium it’s written to could become obsolete.
Saving old programs in this way is done routinely, even for decades-old software. You can find online, for example, a full program listing for the Apollo Guidance Computer—code that took astronauts to the moon during the 1960s. It was transcribed from a paper copy and uploaded to GitHub in 2016.
While perusing such vintage source code might delight hard-core programmers, most people aren’t interested in such things. What they want to do is use the software. But keeping software in ready-to-run form over long periods of time is enormously difficult, because to be able to run most old code, you need both an old computer and an old operating system.
You might have faced this challenge yourself, perhaps while trying to play a computer game from your youth. But being unable to run an old program can have much more serious repercussions, particularly for scientific and technical research.
Along with economists, many other researchers, including physicists, chemists, biologists, and engineers, routinely use software to slice and dice their data and visualize the results of their analyses. They simulate phenomena with computer models that are written in a variety of programming languages and that use a wide range of supporting software libraries and reference data sets. Such investigations and the software on which they are based are central to the discovery and reporting of new research results.
Imagine that you’re an investigator and want to check calculations done by another researcher 25 years ago. Would the relevant software still be around? The company that made it may have disappeared. Even if a contemporary version of the software exists, will it still accept the format of the original data? Will the calculations be identical in every respect—for example, in the handling of rounding errors—to those obtained using a computer of a generation ago? Probably not.
Researchers’ growing dependence on computers and the difficulty they encounter when attempting to run old software are hampering their ability to check published results. The problem of obsolescent software is thus eroding the very premise of reproducibility—which is, after all, the bedrock of science.
The issue also affects matters that could be subject to litigation. Suppose, for example, that an engineer’s calculations show that a building design is robust, but the roof of that building nevertheless collapses. Did the engineer make a mistake, or was the software used for the calculations faulty? It would be hard to know years later if the software could no longer be run.
That’s why my colleagues and I at Carnegie Mellon University, in Pittsburgh, have been developing ways to archive programs in forms that can be run easily today and into the future. My fellow computer scientists Benjamin Gilbert and Jan Harkes did most of the required coding. But the collaboration has also involved software archivist Daniel Ryan and librarians Gloriana St. Clair, Erika Linke, and Keith Webster, who naturally have a keen interest in properly preserving this slice of modern culture.
Bringing Back Yesterday’s Software
The Olive system has been used to create 17 different virtual machines that run a variety of old software, some serious, some just for fun. Here are several views from those archived applications
1/8
NCSA Mosaic 1.0, a pioneering Web browser for the Macintosh from 1993.
2/8
Chaste (Cancer, Heart and Soft Tissue Environment) 3.1 for Linux from 2013.
<img src="https://spectrum.ieee.org/image/MzEzMTUzMg.jpeg&quot; data-original="/image/MzEzMTUzMg.jpeg" id="618441086_2" alt="The Oregon Trail 1.1, a game for the Macintosh from 1990.”> 3/8
The Oregon Trail 1.1, a game for the Macintosh from 1990.
<img src="https://spectrum.ieee.org/image/MzEzMTUzNQ.jpeg&quot; data-original="/image/MzEzMTUzNQ.jpeg" id="618441086_3" alt="Wanderer, a game for MS-DOS from 1988.”> 4/8
Wanderer, a game for MS-DOS from 1988.
<img src="https://spectrum.ieee.org/image/MzEzMTU1MA.jpeg&quot; data-original="/image/MzEzMTU1MA.jpeg" id="618441086_4" alt="Mystery House, a game for the Apple II from 1982.”> 5/8
Mystery House, a game for the Apple II from 1982.
6/8
The Great American History Machine, an educational interactive atlas for Windows 3.1 from 1991.
7/8
Microsoft Office 4.3 for Windows 3.1 from 1994.
8/8
ChemCollective, educational chemistry software for Linux from 2013.
Because this project is more one of archival preservation than mainstream computer science, we garnered financial support for it not from the usual government funding agencies for computer science but from the Alfred P. Sloan Foundation and the Institute for Museum and Library Services. With that support, we showed how to reconstitute long-gone computing environments and make them available online so that any computer user can, in essence, go back in time with just a click of the mouse.
We created a system called Olive—an acronym for Open Library of Images for Virtualized Execution. Olive delivers over the Internet an experience that in every way matches what you would have obtained by running an application, operating system, and computer from the past. So once you install Olive, you can interact with some very old software as if it were brand new. Think of it as a Wayback Machine for executable content.
To understand how Olive can bring old computing environments back to life, you have to dig through quite a few layers of software abstraction. At the very bottom is the common base of much of today’s computer technology: a standard desktop or laptop endowed with one or more x86 microprocessors. On that computer, we run the Linux operating system, which forms the second layer in Olive’s stack of technology.
Sitting immediately above the operating system is software written in my lab called VMNetX, for Virtual Machine Network Execution. A virtual machine is a computing environment that mimics one kind of computer using software running on a different kind of computer. VMNetX is special in that it allows virtual machines to be stored on a central server and then executed on demand by a remote system. The advantage of this arrangement is that your computer doesn’t need to download the virtual machine’s entire disk and memory state from the server before running that virtual machine. Instead, the information stored on disk and in memory is retrieved in chunks as needed by the next layer up: the virtual-machine monitor (also called a hypervisor), which can keep several virtual machines going at once.
Each one of those virtual machines runs a hardware emulator, which is the next layer in the Olive stack. That emulator presents the illusion of being a now-obsolete computer—for example, an old Macintosh Quadra with its 1990s-era Motorola 68040 CPU. (The emulation layer can be omitted if the archived software you want to explore runs on an x86-based computer.)
The next layer up is the old operating system needed for the archived software to work. That operating system has access to a virtual disk, which mimics actual disk storage, providing what looks like the usual file system to still-higher components in this great layer cake of software abstraction.
Above the old operating system is the archived program itself. This may represent the very top of the heap, or there could be an additional layer, consisting of data that must be fed to the archived application to get it to do what you want.
The upper layers of Olive are specific to particular archived applications and are stored on a central server. The lower layers are installed on the user’s own computer in the form of the Olive client software package. When you launch an archived application, the Olive client fetches parts of the relevant upper layers as needed from the central server.
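To make the idea of fetching state on demand a bit more concrete, here is a conceptual sketch, written in JavaScript rather than anything Olive or VMNetX actually uses, of pulling virtual-disk chunks from a central server only when the running virtual machine asks for them. The chunk size, URL, and in-memory cache are invented purely for illustration.

const CHUNK_SIZE = 64 * 1024;  // assumed chunk size in bytes
const chunkCache = new Map();  // chunks already retrieved from the server

async function readDiskRange( serverUrl, offset, length ) {
  const firstChunk = Math.floor( offset / CHUNK_SIZE );
  const lastChunk = Math.floor( ( offset + length - 1 ) / CHUNK_SIZE );
  const parts = [];

  for ( let i = firstChunk; i <= lastChunk; i++ ) {
    if ( !chunkCache.has( i ) ) {
      // Ask the server for just this slice of the disk image.
      const response = await fetch( serverUrl + "/disk.img", {
        headers: { Range: "bytes=" + ( i * CHUNK_SIZE ) + "-" + ( ( i + 1 ) * CHUNK_SIZE - 1 ) }
      });
      chunkCache.set( i, new Uint8Array( await response.arrayBuffer() ) );
    }
    parts.push( chunkCache.get( i ) );
  }
  // The caller would slice out the exact byte range it requested (omitted here).
  return parts;
}

The point is simply that only the pieces a program touches ever cross the network, which is what lets Olive start an archived environment without first shipping its entire disk and memory image.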
Illustration: Nicholas Little
Layers of Abstraction: Olive requires many layers of software abstraction to create a suitable virtual machine. That virtual machine then runs the old operating system and application.
That’s what you’ll find under the hood. But what can Olive do? Today, Olive consists of 17 different virtual machines that can run a variety of operating systems and applications. The choice of what to include in that set was driven by a mix of curiosity, availability, and personal interests. For example, one member of our team fondly remembered playing The Oregon Trail when he was in school in the early 1990s. That led us to acquire an old Mac version of the game and to get it running again through Olive. Once word of that accomplishment got out, many people started approaching us to see if we could resurrect their favorite software from the past.
The oldest application we’ve revived is Mystery House, a graphics-enabled game from the early 1980s for the Apple II computer. Another program is NCSA Mosaic, which people of a certain age might remember as the browser that introduced them to the wonders of the World Wide Web.
Olive provides a version of Mosaic that was written in 1993 for Apple’s Macintosh System 7.5 operating system. That operating system runs on an emulation of the Motorola 68040 CPU, which in turn is created by software running on an actual x86-based computer that runs Linux. In spite of all this virtualization, performance is pretty good, because modern computers are so much faster than the original Apple hardware.
Pointing Olive’s reconstituted Mosaic browser at today’s Web is instructive: Because Mosaic predates Web technologies such as JavaScript, HTTP 1.1, Cascading Style Sheets, and HTML 5, it is unable to render most sites. But you can have some fun tracking down websites composed so long ago that they still look just fine.
What else can Olive do? Maybe you’re wondering what tools businesses were using shortly after Intel introduced the Pentium processor. Olive can help with that, too. Just fire up Microsoft Office 4.3 from 1994 (which thankfully predates the annoying automated office assistant “Clippy”).
Perhaps you just want to spend a nostalgic evening playing Doom for DOS—or trying to understand what made such first-person shooter games so popular in the early 1990s. Or maybe you need to redo your 1997 taxes and can’t find the disk for that year’s version of TurboTax in your attic. Have no fear: Olive has you covered.
On the more serious side, Olive includes Chaste 3.1. The name of this software is short for Cancer, Heart and Soft Tissue Environment. It’s a simulation package developed at the University of Oxford for computationally demanding problems in biology and physiology. Version 3.1 of Chaste was tied to a research paper published in March 2013. Within two years of publication, though, the source code for Chaste 3.1 no longer compiled on new Linux releases. That’s emblematic of the challenge to scientific reproducibility Olive was designed to address.
Illustration: Nicholas Little
To keep Chaste 3.1 working, Olive provides a Linux environment that’s frozen in time. Olive’s re-creation of Chaste also contains the example data that was published with the 2013 paper. Running the data through Chaste produces visualizations of certain muscle functions. Future physiology researchers who wish to explore those visualizations or make modifications to the published software will be able to use Olive to edit the code on the virtual machine and then run it.
For now, though, Olive is available only to a limited group of users. Because of software-licensing restrictions, Olive’s collection of vintage software is currently accessible only to people who have been collaborating on the project. The relevant companies will need to give permissions to present Olive’s re-creations to broader audiences.
We are not alone in our quest to keep old software alive. For example, the Internet Archive is preserving thousands of old programs using an emulation of MS-DOS that runs in the user’s browser. And a project being mounted at Yale, called EaaSI (Emulation as a Service Infrastructure), hopes to make available thousands of emulated software environments from the past. The scholars and librarians involved with the Software Preservation Network have been coordinating this and similar efforts. They are also working to address the copyright issues that arise when old software is kept running in this way.
Olive has come a long way, but it is still far from being a fully developed system. In addition to the problem of restrictive software licensing, various technical roadblocks remain.
One challenge is how to import new data to be processed by an old application. Right now, such data has to be entered manually, which is both laborious and error prone. Doing so also limits the amount of data that can be analyzed. Even if we were to add a mechanism to import data, the amount that could be saved would be limited to the size of the virtual machine’s virtual disk. That may not seem like a problem, but you have to remember that the file systems on older computers sometimes had what now seem like quaint limits on the amount of data they could store.
Another hurdle is how to emulate graphics processing units (GPUs). For a long while now, the scientific community has been leveraging the parallel-processing power of GPUs to speed up many sorts of calculations. To archive executable versions of software that takes advantage of GPUs, Olive would need to re-create virtual versions of those chips, a thorny task. That’s because GPU interfaces—what gets input to them and what they output—are not standardized.
Clearly there’s quite a bit of work to do before we can declare that we have solved the problem of archiving executable content. But Olive represents a good start at creating the kinds of systems that will be required to ensure that software from the past can live on to be explored, tested, and used long into the future.
This article appears in the October 2018 print issue as “Saving Software From Oblivion.”
About the Author
Mahadev Satyanarayanan is a professor of computer science at Carnegie Mellon University, in Pittsburgh.
Carnegie Mellon is Saving Old Software from Oblivion syndicated from https://jiohowweb.blogspot.com
14 notes · View notes
aion-rsa · 4 years ago
Text
How Saving Private Ryan Influenced Medal of Honor and Changed Gaming
https://ift.tt/3azDVUj
The legacies of Medal of Honor and Saving Private Ryan have gone in wildly different directions since the late ’90s. The latter is still thought of as one of the most influential and memorable movies ever made. The former is sometimes referred to as a “Did You Know?” piece of Call of Duty‘s history or maybe just proof the PS1 had a couple of good first-person shooters.
It wasn’t always that way, though. There was a time when the fates of Medal of Honor and Saving Private Ryan seemed destined to be forever intertwined. After all, Medal of Honor was essentially pitched as the game that would do to WW2 games what Saving Private Ryan did for WW2 films.
That never quite happened, but the ways the fates of those two projects began, diverged, met, and ultimately split helped shape the future of gaming in ways you may not know about.
Steven Spielberg’s Son and GoldenEye 007 Change the Fate of the WW2 Shooter
In case Ready Player One didn’t make it clear, Steven Spielberg has always loved video games. While you’d think that the reception to 1982’s E.T. for the Atari (a game so bad that millions of unsold copies were infamously buried in a landfill) would have soured him on the format, Spielberg remained convinced that gaming was going to play a big role in the future of storytelling and entertainment. Spielberg even co-wrote a sometimes overlooked 1995 LucasArts adventure game called The Dig. 
It was around that same time that Spielberg also expressed his interest in making a video game based on his fascination with WW2. In fact, in the early ‘90s, designer Noah Falstein started working on a WW2 game after a conversation with Spielberg reportedly piqued his own interest in that idea.
The project (which was known as both Normandy Beach and Beach Ball at the time) was fascinating. It would have followed two brothers participating in the D-Day invasion: one on the beaches of Normandy and one who was dropping in behind enemy lines. Players would have swapped between the two brothers (try to push aside any Rick and Morty jokes for the moment) as they fought through the war and finally saw each other again. 
It was a great idea, but when Falstein took it to DreamWorks Interactive (the gaming division of the DreamWorks film studio that Spielberg co-owned), he was surprised to be greeted with a cold shoulder. It seems that the DreamWorks Interactive team felt it would be hard to sell a game to kids that was based on a historical event as old as WW2. Work on the project quietly ended months after it had begun. 
While it probably seems wild that a game designer had a hard time pitching a WW2 game in the ‘90s, you have to remember that widespread cultural interest in WW2 at that time was still fairly low. There were WW2 games released prior to that point, but most of them either made passing references to the era (such as Wolfenstein) or were hardcore strategy titles typically aimed at an older audience. Most studios believed that kids wanted sci-fi and fantasy action games, and many of them weren’t willing to invest heavily on the chance they were wrong.
Spielberg shared that concern, but he saw it slightly differently. As someone who believed that WW2 was this event that shaped the generation that lived through it and those that came after, he felt this desire to inform people of the war’s impact and intrigue through the considerable means and talent available to him. That strategy obviously included Saving Private Ryan, but he was especially interested in reaching that same young audience that DWI felt would largely ignore a WW2 game. 
Legend has it that a lightbulb went off in Spielberg’s head as he watched his son play GoldenEye 007 for N64. Intrigued by both his son’s fascination with that shooter, and the clear advances in video game technology it represented, he took time away from Saving Private Ryan’s post-production process, visited the DWI team, and told them that he wanted to see a concept for a WW2 first-person shooter set in Europe and named after the Medal of Honor. If the team wasn’t stunned yet, they certainly would be when Spielberg told them that they had one week to show him a demo.
Doubtful that they could produce a compelling demo in such a short amount of time, doubtful the PS1 could handle such an ambitious FPS concept, and still very much doubtful that gamers wanted to play a WW2 shooter, the team reworked the engine for the recent The Lost World: Jurassic Park PlayStation game and used it as the basis for what was later described as a shoestring proof of concept.
It may have been pieced together, but what DWI came up with was enough to excite Spielberg and, more importantly, excite the game’s developers. Suddenly, people were starting to buy into the idea that this whole thing could work and was very much worth doing. As the calendar turned to 1998 and Saving Private Ryan became a blockbuster that was also changing the conversation about World War 2, it suddenly felt like the DWI team might just have a hit on their hands.
Unfortunately, not everyone was on board with the idea, and those doubts would soon change the trajectory of the game.
Columbine and Veteran Concerns Force Medal of Honor to Move Away from Saving Private Ryan
Saving Private Ryan was widely praised for its brutal authenticity that effectively conveyed the horrors of war, as well as its technical accomplishments that changed the way films are made and talked about. At first, it seemed like the Medal of Honor team intended to attempt to recreate both of those elements. 
On the technical side of things, the developers were succeeding in ways that their makeshift demo barely suggested was possible. While the team was right that developing an FPS on PlayStation meant working around certain restrictions (they couldn’t get daytime levels to look right so everything in the original game happens at night), the PlayStation proved to be remarkably capable in other ways. Because the team was limited from a purely visual perspective, they decided to focus on character animations and AI in order to “sell” the world. 
While they’re easy to overlook now, the original Medal of Honor did things with enemy AI that few gamers had seen at the time. Enemies reacted to being shot in ways that suggested they were more than just bullet sponges. They’d drop their weapons, lose their helmets, scramble for cover…it all contributed to the sensation of battling actual humans. Well, not actual humans but rather Nazis. In fact, the thrill of feeling every bullet you fired at Nazis was one of the things that excited the team early on. To this day, both developers and players fondly recall being able to do things like make a dog fetch a grenade and carry it back to its handler.
Just as it was in Saving Private Ryan, sound design was a key part of what made Medal of Honor work. Everything ricocheted and responded with a level of authenticity that perfectly complemented the film-like orchestral score that they had commissioned from game composer Michael Giacchino. The quality of the game’s sound is partially attributed to the contributions of Captain Dale Dye who helped ensure the authenticity of Saving Private Ryan and did the same for Medal of Honor. Initially, Dye was doubtful the game could be on the same level as the film.
Actually, Dye’s feedback was one of the earliest indications that some veterans were going to be very apprehensive of the idea of turning war into a video game like Medal of Honor. Dye eventually saw that their intentions were good, but the team soon received another wake-up call when Paul Bucha, a Medal of Honor recipient and then-president of the Congressional Medal of Honor Society, wrote a letter to Steven Spielberg that essentially shamed him for his involvement with the game and demanded that he remove the Medal of Honor name from the project. At that point in development, such a change could have meant the project’s cancellation. 
That wasn’t the only problem that suddenly emerged. In April 1999 (six months before Medal of Honor’s release), the Columbine High School massacre occurred and changed the conversation about violence in entertainment (especially video games). Reports indicate that Medal of Honor was, at that time, a particularly violent video game clearly modeled after the brutality of Saving Private Ryan that also featured an almost comical level of blood that some who saw the early versions of the title compared to The Evil Dead. Suddenly, the team felt apprehensive about what they had been going for.
These events and concerns essentially encouraged the Medal of Honor team to step away from Saving Private Ryan a bit and focus on a few different things that would go on to separate the game from the film it was spiritually based on.
Read more
Movies
How Saving Private Ryan’s Best Picture Loss Changed the Oscars Forever
By David Crow
Movies
Audrey Hepburn: The Secret WW2 History of a Dutch Resistance Spy
By David Crow
Medal of Honor: A Different Kind of War Game
It was easy enough to cut Medal of Honor’s violence (or at least its gore), but when it came to addressing concerns of the game’s commercialization and gamification of war and the experience of soldiers, the team found some more creative solutions.
For instance, you may notice that the original Medal of Honor is a much more “low-key” shooter and WW2 game compared to other titles at the time and those that would follow. Well, part of that tone was based on Spielberg’s desire to have the game tell more of a story through gameplay than other shooters had done up until that point (an innovation in and of itself), but a lot of that comes from the input of Dale Dye. 
As Dye taught the team what it was like to be a soldier and serve during WW2, they gained a perspective that they felt the need to share. This is part of the reason why Medal of Honor features a lot of text and cutscene segments designed to teach parts of the history of the war that would have otherwise likely been left on the cutting room floor. There’s a documentary feel to that title that you still don’t see in a lot of period-specific games. 
That decision may have also helped save Medal of Honor in the long run. When Bucha raised his concerns about the project, the team took them seriously enough to consider canceling the game just months before its release. However, producer Peter Hirschmann extended an invite to Bucha so that he could see what exactly it was that they were working on.
It was a bold move that proved to pay off as Bucha was so impressed with the game’s direction (a direction that changed drastically in development) that he actually ended up officially supporting the title. Maybe it wasn’t as grand and impactful as Saving Private Ryan, but he saw the team was doing something that was so much more than just a high score and gore.
Medal of Honor proved to be a hit in 1999, but the celebration was impacted by the news that DWI had been sold to EA. The good news was that most of the key members of the DWI team were able to stay together to work on 2000’s Medal of Honor: Underground: a criminally overlooked game that told a brilliant story about a French Resistance fighter modeled after the legendary Hélène Deschamps Adams. That game advanced the unique style of the original game and retained its quality. I highly recommend you play it if you’ve never done so.
Soon, though, everything would change in a way that brought the series directly back to Saving Private Ryan, with effects that are still being felt to this day.
Medal of Honor: Allied Assault – The (Mostly) Unofficial Saving Private Ryan Game
EA decided to continue the Medal of Honor series but without the old Medal of Honor team at the helm. The story goes that they initially asked id Software to develop the next Medal of Honor game, but the id team said they were too busy and instead recommended they ask a studio called 2015 Games to further the franchise.  
Never heard of them? I’m not surprised. The team’s previous work hadn’t set the world on fire, but EA took id’s recommendation to heart and asked the young developers to start working on what would become 2002’s Medal of Honor: Allied Assault. 
It’s funny, but for such an important game, we really don’t know a lot about the details of Allied Assault’s development. It’s been said that the game’s development was pretty rough (the young team apparently struggled to combine their separate ideas under one creative vision), and we also know that they contacted Dale Dye for authenticity input just as the DWI team had done. 
What we don’t exactly know is why Allied Assault was designed to so closely resemble Saving Private Ryan.
It’s easy to assume that the developers were just big fans of the movie (who wasn’t back then?), but there are elements of Allied Assault that are essentially pulled directly from the movie. A lot of the dialog is slightly reworked Saving Private Ryan lines, some of the characters are carbon copies of the film’s leads, and certain missions are pulled directly from the most memorable events of the movie. 
The most famous example of that last point has to be the game’s infamous beach assault mission. The mission is at times nearly a 1:1 recreation of Saving Private Ryan’s opening scene, and some people believe to this day that the game actually starts with it, just as the movie began with a similar sequence. While 2002’s Medal of Honor: Frontline (which borrowed heavily from Allied Assault despite being developed by a different team) did start with a Normandy Beach invasion, that sequence doesn’t happen in Allied Assault until you reach the third mission. 
Regardless, it’s the part of the game everyone seems to remember all these years later. Objectively a technical accomplishment that recreated the sensation of watching Saving Private Ryan’s infamous opener in a way that nothing else really had, that beach sequence also stood in direct contrast to much of the game that came before it. The early parts of Allied Assault were a little quieter and modeled more after the “adventure/espionage” style of the original games in the series. From that point though, Allied Assault essentially served as a Saving Private Ryan video game. One of the game’s final missions even nearly recreates the sniping sequence from the finale of that film.
It’s almost like the 2015 Games team was working on a “one for you, one for me” program. Here’s more of the Medal of Honor that came before, but here’s this absolutely intense action game that not only recalls Saving Private Ryan but in some ways directly challenges it. At a time when movie studios were still looking for sequences that would rival Normandy, the Allied Assault team used that sequence as the basis for a compelling argument that gaming was more than ready to match and perhaps surpass the most intense moments in film history. 
The idea that 2015 was going rogue a bit with their ambitions may be supported by the fact that EA eventually decided all future Medal of Honor games would be developed in-house. This came as a shock to the 2015 Games team who felt they did a fantastic job and were practically drowning in accolades at that time. 
Desperate to stay afloat, the 2015 team put a call out to studios to let them know that most of the people responsible for one of 2002’s best games (and a shooter some called the best since Half-Life) were ready and able to continue their work under a different name. 
Activision ended up answering their call, and that project became 2003’s Call of Duty. Before Call of Duty went on to become one of the most successful and profitable franchises in video game history, it was this brilliant single-player-focused WW2 shooter made by a newly formed studio called Infinity Ward. Almost every one of that game’s levels matched the intensity of that infamous beach sequence from Allied Assault. Infinity Ward’s ability to consistently deliver that kind of intensity set a new standard that some will tell you has never been truly surpassed. 
The story of what happened to Medal of Honor is a touch sadder.
Medal of Honor’s Complicated Legacy and Saving Private Ryan’s Lasting Influence
While 2002’s Medal of Honor: Frontline was called the game of the year by many outlets, subsequent games in the series garnered decidedly more mixed receptions. 2003’s Medal of Honor: Rising Sun was no match for Call of Duty, just as 2004’s Medal of Honor: Pacific Assault couldn’t hold a candle to Call of Duty 2 in the minds of many. As time went on, the Medal of Honor franchise attempted to mimic Call of Duty in more and more overt ways. The results could generously be described as mixed. 
For a series that started out with a direct line to Saving Private Ryan, it’s a little ironic that Medal of Honor was eventually defined and defeated by a studio that was more willing to directly embrace that movie’s style, story, and best moments. Perhaps the early Medal of Honor games weren’t in the best position to emulate Saving Private Ryan so directly from a technological and content standpoint, but there’s something sad about the way the things Medal of Honor initially did to distinguish itself as more than a Saving Private Ryan adaptation have been slightly lost in favor of simply walking the path forged by one of the most influential films of the last 25 years. Allied Assault and the early Call of Duty games deserve the praise they’ve received, but it’s hard not to wonder what might have been if more games looked at how Medal of Honor initially distinguished itself and went for something different.
But even fallen (or mostly fallen) franchises can leave a lasting legacy. As far as Medal of Honor goes, nobody summarized its legacy quite so elegantly as Max Spielberg: the kid whose GoldenEye sessions helped inspire the development of the first Medal of Honor game. 
“Medal of Honor is one of the few great marriages of game and film,” said Spielberg. “It was that first rickety bridge built between the silver screen and the home console.”
Maybe the first Medal of Honor games didn’t exactly recreate Saving Private Ryan, but they aimed for that level of success in a way that most studios would have never dreamed of. There are times when it’s easy to take for granted how video games can make the best movies come to life. What we should never forget are the contributions of the developers who took the whispers of “Could you imagine playing a game that looks like that?” that we hoped wouldn’t echo through a crowded theater and turned them into the games we know and love today. 
The post How Saving Private Ryan Influenced Medal of Honor and Changed Gaming appeared first on Den of Geek.
from Den of Geek https://ift.tt/3dmFkPJ
0 notes
jackrgaines · 5 years ago
Text
Divi vs. Elementor: Which WordPress Page Builder Is Right for Your Site?
The post Divi vs. Elementor: Which WordPress Page Builder Is Right for Your Site? appeared first on HostGator Blog.
If you’re interested in getting a website up and running and want to do it yourself, then WordPress is an excellent bet.
WordPress is the most popular content management system and powers 35.2% of all websites. WordPress also keeps getting easier to navigate on your own, and there are several excellent WordPress page builders that will help you through the process of building your website.
With all of the different website builders on the market, though, how is a novice to know which one is best? Well, it depends on what you’re looking for, how much you already know about website building, and your budget.
To help you make an informed decision, here is an in-depth review of two of the most popular WordPress page builders on the market, Divi vs. Elementor.
Tumblr media
What is Divi?
You may already know Divi as one of the most popular WordPress themes, but it’s more than that. Divi is also a website building platform that makes building a WordPress website significantly easier. Divi also includes several visual features that help you make your website more visually appealing.
Tumblr media
Let’s take a closer look at some of the most impressive features of the Divi WordPress builder. 
Features of Divi
Here is what you can expect feature-wise when you select Divi as your WordPress page builder.
Drag & drop building. Divi makes it easy to add, delete, and move elements around as you’re building your website. The best part is you don’t have to know how to code. All of the design is done on the front end of your site, not the back-end.
Real-time visual editing. You can design your page and see how it looks as you go. Divi provides many intuitive visual features that help you make your page look how you want it to without having to know anything technical about web design.
Custom CSS controls. If you do have custom CSS, you can combine it with Divi’s visual editing controls. If you don’t know what this means, no worries. You can stick to a theme or the drag and drop builder.
Responsive editing. You don’t have to worry about whether or not your website will be mobile responsive. It will be. Plus, you can edit how your website will look on a mobile device with Divi’s various responsive editing tools.
Robust design options. Many WordPress builders have only a few design options. Divi allows you full design control over your website.
Tumblr media
Inline text editing. All you have to do to edit your copy is click on the place where you want your text to appear and start typing.
Save multiple designs. If you’re not sure exactly how you want your website to look before you publish it, you can create multiple custom designs, save them, and decide later. You can also save your designs to use as templates for future pages. This helps your website stay consistent and speed up the website creation process.
Global elements and styles. Divi allows you to manage your design with website-wide design settings, allowing you to build a whole website, not just a page.
Easy revisions. You can quickly undo, redo, and make revisions as you design.
Pros of Divi
Why would you want to choose Divi vs. Elementor? Here are the top advantages of Divi to consider as you make your decision.
More templates. Divi has over 800 predesigned templates and they are free to use. If you don’t want to design your own website, simply pick one of the templates that best matches your style.
Tumblr media
Full website packs. Not only does Divi have a wide range of pre-designed templates, but they also offer entire website packs, based on various industries and types of websites (e.g., business, e-commerce, health, beauty, services, etc.).  This makes it easy to quickly design a website that matches your needs.
In-line text editing. The in-line text editing feature is an excellent feature. All you have to do is point and click and you can edit any block of text.
Lots of content modules. Divi has over 30 customizable content modules. You can insert these modules (e.g., CTA buttons, email opt-in forms, maps, testimonials, video sliders, countdown timers, etc.) in your row and column layouts.
Creative freedom. You really have a lot of different options when it comes to designing your website. If you can learn how to use all of the various features, you’ll be able to build a nice website without having to know anything about coding.
Cons of Divi
Before you decide to hop on the Divi bandwagon, it’s essential to consider potential drawbacks. Here are the cons of the Divi WordPress website builder to help you make a more informed decision.
No pop-up builder. Unfortunately, Divi doesn’t include a pop-up builder. Pop-ups are a great way to draw attention to announcements, promotions, and a solid way to capture email subscribers. 
Too many options. While Divi has so many builder options that you can do nearly anything, some reviewers find that the sheer number of choices can be overwhelming, which detracts from its simplicity of use.
Learning curve. Since there are so many features with Divi, it can take some extra time to learn how to effectively use them all.
The Divi theme is basic. It’s critical to remember that the Divi theme and the Divi WordPress builder are two different things. You can use the Divi WordPress builder with any WordPress theme, including the Divi theme. However, if you opt for the Divi theme, it’s worth knowing that some reviewers find it a bit basic. You may want to branch out and find a more suitable theme.
Glitchy with longer pages. Some reviewers also say that Divi can get glitchy when trying to build longer pages. This shouldn’t be too much of a problem if you’re only looking for a basic website.
What is Elementor?
Elementor is an all-in-one WordPress website builder solution where you can control every piece of your website design from one platform.
Like Divi, Elementor also provides a flexible and simple visual editor that makes it easy to create a gorgeous website, even if you have no design experience.
Elementor also touts their ability to help you build a website that loads faster and that you can build quickly.
Features of Elementor
You already know what Divi can do. Here is what you can expect feature-wise when you sign up with Elementor vs. Divi.
Drag and drop builder. Elementor also includes a drag and drop website builder, so you can create your website without knowing how to code. It also provides live editing so you can see how your site looks as you go. 
Tumblr media
All design elements together. With Elementor, you don’t have to switch between various screens to design and to make changes and updates. All your content, including your header, footer, and website content, are editable from the same page.
Save and reuse elements and widgets. You can save design elements and widgets in your account and reuse them on other pages. This helps you save time and keep your pages consistent across your website.
300+ templates. Elementor has a pre-designed template for every possible website need and industry. If you don’t trust your drag and drop design skills, then simply pick one of the pre-designed templates. Of course, you can customize the theme with the drag and drop feature, but there is no need to start from scratch.
Responsive mobile editor. It’s no longer an option to have a website that isn’t mobile responsive. Elementor makes it a point to help you customize the way your website looks on a desktop and a mobile device, so you are catering to all your website visitors, not just those visiting from a desktop computer.
Pop-up builder. The use of pop-ups is a strategic way to draw attention to a promotion, an announcement, or your email list. Elementor’s pro plan helps you make pixel-perfect popups, including advanced targeting options.
Tumblr media
Over 90 widgets. You can choose from over 90 widgets that will help you quickly create the design elements you need to incorporate into your website. These widgets help you add things like buttons, forms, headlines, and more to your web pages.
Pros of Elementor
Here is a quick overview of the pros of the Elementor. If these advantages are important to you, Elementor may be the perfect fit for you.
Rich in features. Elementor is one of the best WordPress builders on the market and has tons of different features to help you create a quality website.
Maximum layout control. Elementor’s interface is extremely intuitive, and the design features are easy to use. You don’t have to train yourself on how to use Elementor. You just log in and start working.
Easy to use. For the most part, Elementor’s drag and drop interface is easy to use. You can choose from different premade blocks, templates, and widgets.
Finder search tool. In the event you can’t find something easily with Elementor, you can turn your attention to the search window, type in the feature or page you’re looking for, and Elementor will direct you to it.
Always growing. Elementor’s team is always working to stay ahead of the curve by pushing out new features often.
WooCommerce builder. Elementor has a nice WooCommerce Builder in their pro package. It’s easy to design your eCommerce website without having to know how to code. Widgets you can use on your product page include an add to cart button, product price, product title, product description, product image, upsells, product rating, related products, product stock, and more.
Integrations. Elementor provides various marketing integrations that most website owners use on their sites. Integrations include AWeber, Mailchimp, Drip, ActiveCampaign, ConvertKit, HubSpot, Zapier, GetResponse, MailerLite, and MailPoet. WordPress plugins include WooCommerce, Yoast, ACF, Toolset, and PODS. Social integrations include Slack, Discord, Facebook SDK, YouTube, Vimeo, Dailymotion, SoundCloud, and Google Maps. Other integrations include Adobe Fonts, Google Fonts, Font Awesome 5, Font Awesome Pro, Custom Icon Libraries, and reCAPTCHA. There are also many 3rd party add-ons and you can build your own integrations.
Cons of Elementor
As with any website builder, there are advantages and disadvantages. Here are the cons of Elementor to consider when making your choice between Divi vs. Elementor.
Fewer templates than Divi. Elementor only has 300+ templates as opposed to Divi’s 800+. While there are fewer templates, they are still well-designed and will help you build a beautiful website. Some people may actually consider this an advantage, because there are fewer templates to sort through, and it doesn’t take up as much of your time to choose a template.
Outdated UI. Some reviewers say the Elementor user interface is outdated, making some features more difficult to find and use. It will be interesting to see if and how Elementor innovates its user interface in the future.
Issues with editing mode. Sometimes the website will look different when in editing mode. This can be frustrating for some users.
Margin and padding adjustability issue. When using the drag and drop builder, you can’t adjust the margin and padding, according to some reviewers.
Customer support. It can be difficult to get in touch with a customer support team member quickly and to get custom solutions to your issues.
No white label. Elementor doesn’t come with a white label option.
Problems with third-party add-ons. While Elementor allows for a lot of third-party add-ons, these add-ons often cause issues.
Divi vs. Elementor: Which Will You Choose?
Regardless of which website builder you select, Divi or Elementor, you’ll need a web hosting company to park your WordPress website. 
HostGator provides secure and affordable managed WordPress hosting plans that start at only $5.95 a month. Advantages include 2.5x the speed, advanced security, free migrations, a free domain, a free SSL certificate, and more.
Check out HostGator’s managed WordPress hosting now, and start building your WordPress website.
Find the post on the HostGator Blog
from HostGator Blog https://www.hostgator.com/blog/divi-vs-elementor-wordpress-page-builder/
0 notes
suzanneshannon · 6 years ago
Text
Request with Intent: Caching Strategies in the Age of PWAs
Once upon a time, we relied on browsers to handle caching for us; as developers in those days, we had very little control. But then came Progressive Web Apps (PWAs), Service Workers, and the Cache API—and suddenly we have expansive power over what gets put in the cache and how it gets put there. We can now cache everything we want to… and therein lies a potential problem.
Media files—especially images—make up the bulk of average page weight these days, and it’s getting worse. In order to improve performance, it’s tempting to cache as much of this content as possible, but should we? In most cases, no. Even with all this newfangled technology at our fingertips, great performance still hinges on a simple rule: request only what you need and make each request as small as possible.
To provide the best possible experience for our users without abusing their network connection or their hard drive, it’s time to put a spin on some classic best practices, experiment with media caching strategies, and play around with a few Cache API tricks that Service Workers have hidden up their sleeves.
Best intentions
All those lessons we learned optimizing web pages for dial-up became super-useful again when mobile took off, and they continue to be applicable in the work we do for a global audience today. Unreliable or high latency network connections are still the norm in many parts of the world, reminding us that it’s never safe to assume a technical baseline lifts evenly or in sync with its corresponding cutting edge. And that’s the thing about performance best practices: history has borne out that approaches that are good for performance now will continue being good for performance in the future.
Before the advent of Service Workers, we could provide some instructions to browsers with respect to how long they should cache a particular resource, but that was about it. Documents and assets downloaded to a user’s machine would be dropped into a directory on their hard drive. When the browser assembled a request for a particular document or asset, it would peek in the cache first to see if it already had what it needed to possibly avoid hitting the network.
We have considerably more control over network requests and the cache these days, but that doesn’t excuse us from being thoughtful about the resources on our web pages.
Request only what you need
As I mentioned, the web today is lousy with media. Images and videos have become a dominant means of communication. They may convert well when it comes to sales and marketing, but they are hardly performant when it comes to download and rendering speed. With this in mind, each and every image (and video, etc.) should have to fight for its place on the page. 
A few years back, a recipe of mine was included in a newspaper story on cooking with spirits (alcohol, not ghosts). I don’t subscribe to the print version of that paper, so when the article came out I went to the site to take a look at how it turned out. During a recent redesign, the site had decided to load all articles into a nearly full-screen modal viewbox layered on top of their homepage. This meant requesting the article required requests for all of the assets associated with the article page plus all the contents and assets for the homepage. Oh, and the homepage had video ads—plural. And, yes, they auto-played.
I popped open DevTools and discovered the page had blown past 15 MB in page weight. Tim Kadlec had recently launched What Does My Site Cost?, so I decided to check out the damage. Turns out that the actual cost to view that page for the average US-based user was more than the cost of the print version of that day’s newspaper. That’s just messed up.
Sure, I could blame the folks who built the site for doing their readers such a disservice, but the reality is that none of us go to work with the goal of worsening our users’ experiences. This could happen to any of us. We could spend days scrutinizing the performance of a page only to have some committee decide to set that carefully crafted page atop a Times Square of auto-playing video ads. Imagine how much worse things would be if we were stacking two abysmally-performing pages on top of each other!
Media can be great for drawing attention when competition is high (e.g., on the homepage of a newspaper), but when you want readers to focus on a single task (e.g., reading the actual article), its value can drop from important to “nice to have.” Yes, studies have shown that images excel at drawing eyeballs, but once a visitor is on the article page, no one cares; we’re just making it take longer to download and more expensive to access. The situation only gets worse as we shove more media into the page. 
We must do everything in our power to reduce the weight of our pages, so avoid requests for things that don’t add value. For starters, if you’re writing an article about a data breach, resist the urge to include that ridiculous stock photo of some random dude in a hoodie typing on a computer in a very dark room.
Request the smallest file you can
Now that we’ve taken stock of what we do need to include, we must ask ourselves a critical question: How can we deliver it in the fastest way possible? This can be as simple as choosing the most appropriate image format for the content presented (and optimizing the heck out of it) or as complex as recreating assets entirely (for example, if switching from raster to vector imagery would be more efficient).
Offer alternate formats
When it comes to image formats, we don’t have to choose between performance and reach anymore. We can provide multiple options and let the browser decide which one to use, based on what it can handle.
You can accomplish this by offering multiple sources within a picture or video element. Start by creating multiple formats of the media asset. For example, with WebP and JPG, it’s likely that the WebP will have a smaller file size than the JPG (but check to make sure). With those alternate sources, you can drop them into a picture like this:
<picture>
  <source srcset="my.webp" type="image/webp">
  <img src="my.jpg" alt="Descriptive text about the picture.">
</picture>
Browsers that recognize the picture element will check the source element before making a decision about which image to request. If the browser supports the MIME type “image/webp,” it will kick off a request for the WebP format image. If not (or if the browser doesn’t recognize picture), it will request the JPG. 
The nice thing about this approach is that you’re serving the smallest image possible to the user without having to resort to any sort of JavaScript hackery.
You can take the same approach with video files:
<video controls>
  <source src="my.webm" type="video/webm">
  <source src="my.mp4" type="video/mp4">
  <p>Your browser doesn’t support native video playback, but you can <a href="my.mp4" download>download</a> this video instead.</p>
</video>
Browsers that support WebM will request the first source, whereas browsers that don’t—but do understand MP4 videos—will request the second one. Browsers that don’t support the video element will fall back to the paragraph about downloading the file.
The order of your source elements matters. Browsers will choose the first usable source, so if you specify an optimized alternative format after a more widely compatible one, the alternative format may never get picked up.  
Depending on your situation, you might consider bypassing this markup-based approach and handle things on the server instead. For example, if a JPG is being requested and the browser supports WebP (which is indicated in the Accept header), there’s nothing stopping you from replying with a WebP version of the resource. In fact, some CDN services—Cloudinary, for instance—come with this sort of functionality right out of the box.
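If you are curious what that kind of server-side negotiation might look like, here is a minimal Node.js/Express sketch; the folder layout, file naming, and middleware shown are assumptions made for illustration, not anything the services mentioned above actually require.

// A minimal sketch of server-side format negotiation. It assumes images live in
// a local "public" folder with matching .jpg and .webp files side by side.
// (A production server would also sanitize the requested path.)
const express = require( "express" );
const fs = require( "fs" );
const path = require( "path" );

const app = express();

app.use( ( req, res, next ) => {
  if ( !req.path.endsWith( ".jpg" ) ) return next();

  const jpgPath = path.join( __dirname, "public", req.path );
  const webpPath = jpgPath.replace( /\.jpg$/, ".webp" );
  const accepts = req.headers.accept || "";

  res.set( "Vary", "Accept" ); // cache WebP and JPG responses separately

  if ( accepts.includes( "image/webp" ) && fs.existsSync( webpPath ) ) {
    return res.sendFile( webpPath ); // the browser asked for .jpg but can handle WebP
  }
  res.sendFile( jpgPath );
});

app.listen( 3000 );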
Offer different sizes
Formats aside, you may want to deliver alternate image sizes optimized for the current size of the browser’s viewport. After all, there’s no point loading an image that’s 3–4 times larger than the screen rendering it; that’s just wasting bandwidth. This is where responsive images come in.
Here’s an example:
<img src="medium.jpg" srcset="small.jpg 256w, medium.jpg 512w, large.jpg 1024w" sizes="(min-width: 30em) 30em, 100vw" alt="Descriptive text about the picture.">
There’s a lot going on in this super-charged img element, so I’ll break it down:
This img offers three size options for a given JPG: 256 px wide (small.jpg), 512 px wide (medium.jpg), and 1024 px wide (large.jpg). These are provided in the srcset attribute with corresponding width descriptors.
The src defines a default image source, which acts as a fallback for browsers that don’t support srcset. Your choice for the default image will likely depend on the context and general usage patterns. Often I’d recommend the smallest image be the default, but if the majority of your traffic is on older desktop browsers, you might want to go with the medium-sized image.
The sizes attribute is a presentational hint that informs the browser how the image will be rendered in different scenarios (its extrinsic size) once CSS has been applied. This particular example says that the image will be the full width of the viewport (100vw) until the viewport reaches 30 em in width (min-width: 30em), at which point the image will be 30 em wide. You can make the sizes value as complicated or as simple as you want; omitting it causes browsers to use the default value of 100vw.
You can even combine this approach with alternate formats and crops within a single picture. 🤯
All of this is to say that you have a number of tools at your disposal for delivering fast-loading media, so use them!
Defer requests (when possible)
Years ago, Internet Explorer 11 introduced a new attribute that enabled developers to de-prioritize specific img elements to speed up page rendering: lazyload. That attribute never went anywhere, standards-wise, but it was a solid early attempt to influence image loading through markup alone, without having to involve JavaScript.
There have been countless JavaScript-based implementations of lazy loading images since then, but recently Google also took a stab at a more declarative approach, using a different attribute: loading.
The loading attribute supports three values (“auto,” “lazy,” and “eager”) to define how a resource should be brought in. For our purposes, the “lazy” value is the most interesting because it defers loading the resource until it reaches a calculated distance from the viewport.
Adding that into the mix…
<img src="medium.jpg" srcset="small.jpg 256w, medium.jpg 512w, large.jpg 1024w" sizes="(min-width: 30em) 30em, 100vw" loading="lazy" alt="Descriptive text about the picture.">
This attribute offers a bit of a performance boost in Chromium-based browsers. Hopefully it will become a standard and get picked up by other browsers in the future, but in the meantime there’s no harm in including it because browsers that don’t understand the attribute will simply ignore it.
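If you want to pair the attribute with a JavaScript fallback for browsers that ignore it, a quick capability check does the trick (the fallback loader named here is hypothetical; swap in whichever library you prefer):

if ( "loading" in HTMLImageElement.prototype ) {
  // The browser understands loading="lazy"; the markup above is all you need.
} else {
  // Otherwise, hand the job to a JavaScript lazy loader of your choosing.
  // loadMyLazyLoadingLibrary();
}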
This approach complements a media prioritization strategy really well, but before I get to that, I want to take a closer look at Service Workers.
Manipulate requests in a Service Worker
Service Workers are a special type of Web Worker with the ability to intercept, modify, and respond to all network requests via the Fetch API. They also have access to the Cache API, as well as other asynchronous client-side data stores like IndexedDB for resource storage.
When a Service Worker is installed, you can hook into that event and prime the cache with resources you want to use later. Many folks use this opportunity to squirrel away copies of global assets, including styles, scripts, logos, and the like, but you can also use it to cache images for use when network requests fail.
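Here’s a minimal sketch of what that priming might look like during the install event. The version prefix mirrors the one used later in this article, and the file paths are placeholders:

const version = "v1:";

self.addEventListener( "install", event => {
  event.waitUntil(
    caches.open( `${version}static` ).then( cache => {
      // Prime the cache with global assets and any fallback images
      return cache.addAll([
        "/css/site.css",
        "/js/site.js",
        "/i/logo.svg",
        "/i/fallbacks/offline.svg"
      ]);
    })
  );
});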
Keep a fallback image in your back pocket
Assuming you want to use a fallback in more than one networking recipe, you can set up a named function that will respond with that resource:
function respondWithFallbackImage() {
  return caches.match( "/i/fallbacks/offline.svg" );
}
Then, within a fetch event handler, you can use that function to provide that fallback image when requests for images fail at the network:
self.addEventListener( "fetch", event => { const request = event.request; if ( request.headers.get("Accept").includes("image") ) { event.respondWith( return fetch( request, { mode: 'no-cors' } ) .then( response => { return response; }) .catch( respondWithFallbackImage ); ); } });
When the network is available, users get the expected behavior:
Social media avatars are rendered as expected when the network is available.
But when the network is interrupted, images will be swapped automatically for a fallback, and the user experience is still acceptable:
A generic fallback avatar is rendered when the network is unavailable.
On the surface, this approach may not seem all that helpful in terms of performance since you’ve essentially added an additional image download into the mix. With this system in place, however, some pretty amazing opportunities open up to you.
Respect a user’s choice to save data
Some users reduce their data consumption by entering a “lite” mode or turning on a “data saver” feature. When this happens, browsers will often send a Save-Data header with their network requests. 
Within your Service Worker, you can look for that signal and adjust your responses accordingly. First, you check whether the user has opted in; the saveData flag exposed by the Network Information API reflects the same preference as the header:
let save_data = false;
if ( 'connection' in navigator ) {
  save_data = navigator.connection.saveData;
}
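If you’d rather read the header itself, you can do that on a per-request basis inside the fetch handler; the Save-Data header carries the value "on" when the preference is enabled:

// Inside the fetch event handler
const save_data = event.request.headers.get( "Save-Data" ) === "on";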
Then, within your fetch handler for images, you might choose to preemptively respond with the fallback image instead of going to the network at all:
self.addEventListener( "fetch", event => { const request = event.request; if ( request.headers.get("Accept").includes("image") ) { event.respondWith( if ( save_data ) { return respondWithFallbackImage(); } // code you saw previously ); } });
You could even take this a step further and tune respondWithFallbackImage() to provide alternate images based on what the original request was for. To do that you’d define several fallbacks globally in the Service Worker:
const fallback_avatar = "/i/fallbacks/avatar.svg", fallback_image = "/i/fallbacks/image.svg";
Both of those files should then be cached during the Service Worker install event:
return cache.addAll( [ fallback_avatar, fallback_image ]);
Finally, within respondWithFallbackImage() you could serve up the appropriate image based on the URL being fetched. On my site, the avatars are pulled from Webmention.io, so I test for that.
function respondWithFallbackImage( url ) {
  const image = /webmention\.io/.test( url ) ? fallback_avatar
                                             : fallback_image;
  return caches.match( image );
}
With that change, I’ll need to update the fetch handler to pass in request.url as an argument to respondWithFallbackImage(). Once that’s done, when the network gets interrupted I end up seeing something like this:
A webmention that contains both an avatar and an embedded image will render with two different fallbacks when the Save-Data header is present.
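For reference, that fetch-handler update is essentially a one-line tweak to the catch clause of the handler shown earlier; roughly:

event.respondWith(
  fetch( request, { mode: "no-cors" } )
    .then( response => response )
    .catch( () => respondWithFallbackImage( request.url ) )
);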
Next, we need to establish some general guidelines for handling media assets—based on the situation, of course.
The caching strategy: prioritize certain media
In my experience, media—especially images—on the web tend to fall into three categories of necessity. At one end of the spectrum are elements that don’t add meaningful value. At the other end of the spectrum are critical assets that do add value, such as charts and graphs that are essential to understanding the surrounding content. Somewhere in the middle are what I would call “nice-to-have” media. They do add value to the core experience of a page but are not critical to understanding the content.
If you consider your media with this division in mind, you can establish some general guidelines for handling each, based on the situation. In other words, a caching strategy.
Media loading strategy, broken down by how critical an asset is to understanding an interface:

Media category | Fast connection | Save-Data / Slow connection / No network
Critical       | Load media      | Replace with placeholder
Nice-to-have   | Load media      | Replace with placeholder
Non-critical   | Remove from content entirely (in every scenario)
When it comes to disambiguating the critical from the nice-to-have, it’s helpful to have those resources organized into separate directories (or similar). That way we can add some logic into the Service Worker that can help it decide which is which. For example, on my own personal site, critical images are either self-hosted or come from the website for my book. Knowing that, I can write regular expressions that match those domains:
const high_priority = [ /aaron\-gustafson\.com/, /adaptivewebdesign\.info/ ];
With that high_priority variable defined, I can create a function that will let me know if a given image request (for example) is a high priority request or not:
function isHighPriority( url ) {
  // how many high priority links are we dealing with?
  let i = high_priority.length;
  // loop through each
  while ( i-- ) {
    // does the request URL match this regular expression?
    if ( high_priority[i].test( url ) ) {
      // yes, it’s a high priority request
      return true;
    }
  }
  // no matches, not high priority
  return false;
}
Adding support for prioritizing media requests only requires adding a new conditional into the fetch event handler, like we did with Save-Data. Your specific recipe for network and cache handling will likely differ, but here was how I chose to mix in this logic within image requests:
// Check the cache first
//   Return the cached image if we have one
//   If the image is not in the cache, continue

// Is this image high priority?
if ( isHighPriority( url ) ) {

  // Fetch the image
  //   If the fetch succeeds, save a copy in the cache
  //   If not, respond with an "offline" placeholder

// Not high priority
} else {

  // Should I save data?
  if ( save_data ) {

    // Respond with a "saving data" placeholder

  // Not saving data
  } else {

    // Fetch the image
    //   If the fetch succeeds, save a copy in the cache
    //   If not, respond with an "offline" placeholder
  }
}
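For the curious, here’s one way that recipe might translate into working code. It’s a sketch, not a drop-in solution: it leans on the isHighPriority(), respondWithFallbackImage(), and save_data pieces defined earlier, on the sw_caches configuration introduced in the next section, and it reuses the same fallback helper for both the "offline" and "saving data" placeholders:

function handleImageRequest( event ) {
  const request = event.request;
  const url = request.url;

  event.respondWith(
    // Check the cache first; return the cached image if we have one
    caches.match( request ).then( cached => {
      if ( cached ) {
        return cached;
      }

      // Helper: fetch the image, save a copy in the images cache,
      // and fall back to a placeholder if the network request fails
      const fetchAndCache = () => fetch( request )
        .then( response => {
          const copy = response.clone();
          caches.open( sw_caches.images.name )
                .then( cache => cache.put( request, copy ) );
          return response;
        })
        .catch( () => respondWithFallbackImage( url ) );

      // Is this image high priority?
      if ( isHighPriority( url ) ) {
        return fetchAndCache();
      }

      // Not high priority: should I save data?
      if ( save_data ) {
        return respondWithFallbackImage( url );
      }

      // Not high priority, not saving data
      return fetchAndCache();
    })
  );
}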
We can apply this prioritized approach to many kinds of assets. We could even use it to control which pages are served cache-first vs. network-first.
Keep the cache tidy
The ability to control which resources are cached to disk is a huge opportunity, but it also carries with it an equally huge responsibility not to abuse it.
Every caching strategy is likely to differ, at least a little bit. If we’re publishing a book online, for instance, it might make sense to cache all of the chapters, images, etc. for offline viewing. There’s a fixed amount of content and—assuming there aren’t a ton of heavy images and videos—users will benefit from not having to download each chapter separately.
On a news site, however, caching every article and photo will quickly fill up our users’ hard drives. If a site offers an indeterminate number of pages and assets, it’s critical to have a caching strategy that puts hard limits on how many resources we’re caching to disk. 
One way to do this is to create several different blocks associated with caching different forms of content. The more ephemeral content caches can have strict limits around how many items can be stored. Sure, we’ll still be bound to the storage limits of the device, but do we really want our website to take up 2 GB of someone’s hard drive?
Here’s an example, again from my own site:
const sw_caches = {
  static: {
    name: `${version}static`
  },
  images: {
    name: `${version}images`,
    limit: 75
  },
  pages: {
    name: `${version}pages`,
    limit: 5
  },
  other: {
    name: `${version}other`,
    limit: 50
  }
}
Here I’ve defined several caches, each with a name used for addressing it in the Cache API and a version prefix. The version is defined elsewhere in the Service Worker, and allows me to purge all caches at once if necessary.
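Because every cache name starts with that version prefix, bumping the version lets you sweep away everything belonging to older releases. A sketch of that cleanup, run during the Service Worker’s activate event, might look like this:

self.addEventListener( "activate", event => {
  event.waitUntil(
    caches.keys().then( keys => {
      return Promise.all(
        keys
          // Keep only caches from the current version; delete the rest
          .filter( key => ! key.startsWith( version ) )
          .map( key => caches.delete( key ) )
      );
    })
  );
});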
With the exception of the static cache, which is used for static assets, every cache has a limit to the number of items that may be stored. I only cache the most recent 5 pages someone has visited, for instance. Images are limited to the most recent 75, and so on. This is an approach that Jeremy Keith outlines in his fantastic book Going Offline (which you should really read if you haven’t already—here’s a sample).
With these cache definitions in place, I can clean up my caches periodically and prune the oldest items. Here’s Jeremy’s recommended code for this approach:
function trimCache(cacheName, maxItems) {
  // Open the cache
  caches.open(cacheName)
  .then( cache => {
    // Get the keys and count them
    cache.keys()
    .then(keys => {
      // Do we have more than we should?
      if (keys.length > maxItems) {
        // Delete the oldest item and run trim again
        cache.delete(keys[0])
        .then( () => {
          trimCache(cacheName, maxItems)
        });
      }
    });
  });
}
We can trigger this code to run whenever a new page loads. By running it in the Service Worker, it runs in a separate thread and won’t drag down the site’s responsiveness. We trigger it by posting a message (using postMessage()) to the Service Worker from the main JavaScript thread:
// First check to see if you have an active service worker
if ( navigator.serviceWorker.controller ) {
  // Then add an event listener
  window.addEventListener( "load", function(){
    // Tell the service worker to clean up
    navigator.serviceWorker.controller.postMessage( "clean up" );
  });
}
The final step in wiring it all up is setting up the Service Worker to receive the message:
addEventListener("message", messageEvent => { if (messageEvent.data == "clean up") { // loop though the caches for ( let key in sw_caches ) { // if the cache has a limit if ( sw_caches[key].limit !== undefined ) { // trim it to that limit trimCache( sw_caches[key].name, sw_caches[key].limit ); } } } });
Here, the Service Worker listens for inbound messages and responds to the “clean up” request by running trimCache() on each of the cache buckets with a defined limit.
This approach is by no means elegant, but it works. It would be far better to make decisions about purging cached responses based on how frequently each item is accessed and/or how much room it takes up on disk. (Removing cached items based purely on when they were cached isn’t nearly as useful.) Sadly, we don’t have that level of detail when it comes to inspecting the caches…yet. I’m actually working to address this limitation in the Cache API right now.
Your users always come first
The technologies underlying Progressive Web Apps are continuing to mature, but even if you aren’t interested in turning your site into a PWA, there’s so much you can do today to improve your users’ experiences when it comes to media. And, as with every other form of inclusive design, it starts with centering on your users who are most at risk of having an awful experience.
Draw distinctions between critical, nice-to-have, and superfluous media. Remove the cruft, then optimize the bejeezus out of each remaining asset. Serve your media in multiple formats and sizes, prioritizing the smallest versions first to make the most of high latency and slow connections. If your users say they want to save data, respect that and have a fallback plan in place. Cache wisely and with the utmost respect for your users’ disk space. And, finally, audit your caching strategies regularly—especially when it comes to large media files.

Follow these guidelines, and every one of your users—from folks rocking a JioPhone on a rural mobile network in India to people on a high-end gaming laptop wired to a 10 Gbps fiber line in Silicon Valley—will thank you.
Request with Intent: Caching Strategies in the Age of PWAs published first on https://deskbysnafu.tumblr.com/
0 notes
nancydsmithus · 6 years ago
Text
Monthly Web Development Update 8/2019: Strong Teams And Ethical Data Sensemaking
Anselm Hannemann
2019-08-16T13:51:00+02:00
What’s more powerful than a star who knows everything? Well, a team not made of stars but of people who love what they do, stand behind their company’s vision and can work together, support each other. Like a galaxy made of stars — where not every star shines and also doesn’t need to. Everyone has their place, their own strength, their own weakness. Teams don’t consist only of stars, they consist of people, and the most important thing is that the work and life culture is great. So don’t do a moonshot if you’re hiring someone but try to look for someone who fits into your team and encourages, supports your team’s values and members.
In terms of your own life, take some time today to take a deep breath and recall what happened this week. Go through it day by day and appreciate the actions, the negative ones as well as the positive ones. Accept that negative things happen in our lives as well, otherwise we wouldn’t be able to feel good either. It’s a helpful exercise to balance your life, to have a way of invalidating the feeling of “I did nothing this week” or “I was quite unproductive.” It makes you understand why you might not have worked as much as you’re used to — but it feels fine because there’s a reason for it.
News
Three weeks ago we officially exhausted the Earth’s natural resources for the year — with four months left in 2019. Earth Overshoot Day is a good indicator of where we’re currently at in the fight against climate change and it’s a great initiative by people who try to give helpful advice on how we can move that date so one day in the (hopefully) near future we’ll reach overshoot day not before the end of the year or even in a new year.
Chrome 76 brings the prefers-color-scheme media query (e.g. for dark mode support) and multiple simplifications for PWA installation.
UI/UX
There are times to use toggle switches and times not to. When designers misuse them, it leads to confused and frustrated users. Knowing when to use them requires an understanding of the different types of toggle states and options.
Font Awesome introduced Duotone Icons. An amazing set that is worth taking a look at.
JavaScript
Ben Frain explores the possibility of building a Progressive Web Application (PWA) without a framework. A quite interesting article series that shows the difference between relying on frameworks by default and building things from scratch.
Web Performance
Some experiments sound silly but in reality, they’re not: Chris Ashton used the web for a day on a 50MB budget. In Zimbabwe, for example, where 1 GB costs an average of $75.20, ranging from $12.50 to $138.46, 50MB is incredibly expensive. So reducing your app bundle size, image size, and website cost are directly related to how happy your users are when they browse your site or use your service. If it costs them $3.76 (50MB) to access your new sports shoe teaser page, it’s unlikely that they will buy or recommend it.
BBC’s Toby Cox shares how they ditched iframes in favor of ShadowDOM to improve their site performance significantly. This is a good piece explaining the advantages and drawbacks of iframes and why adopting ShadowDOM takes time and still feels uncomfortable for most of us.
Craig Mod shares why people prefer to choose (and pay for) fast software. People are grateful for it and are easily annoyed if the app takes too much time to start or shows a laggy user interface.
Harry Roberts explains the details of the “time to first byte” metric and why it matters.
CSS
Yes, prefers-reduced-motion isn’t super new anymore but still heavily underused on the web. Here’s how to apply it to your web application to serve a user’s request for reduced motion.
HTML & SVG
With Chrome 76 we get the loading attribute which allows for native lazy loading of images just with HTML. It’s great to have a handy article that explains how to use, debug, and test it on your website today.
No more custom lazy-loading code or a separate JavaScript library needed: Chrome 76 comes with native lazy loading built in. (Image credit)
Accessibility
The best algorithms available today still struggle to recognize black faces as well as they do white ones, which again shows how important it is to have diverse teams and to care about inclusiveness.
Security
Here’s a technical analysis of the Capital One hack. A good read for anyone who uses Cloud providers like AWS for their systems because it all comes down to configuring accounts correctly to prevent hackers from gaining access due to a misconfigured cloud service user role.
Privacy
Safari introduced its Intelligent Tracking Prevention technology a while ago. Now there’s an official Safari ITP policy documentation that explains how it works, what will be blocked and what not.
SmashingMag launched a print and eBook magazine all about ethics and privacy. It contains great pieces on designing for addiction, how to improve ethics step by step, and quieting disquiet. A magazine worth reading.
Work & Life
“For a long time I believed that a strong team is made of stars — extraordinary world-class individuals who can generate and execute ideas at a level no one else can. These days, I feel that a strong team is the one that feels more like a close family than a constellation of stars. A family where everybody has a sense of predictability, trust and respect for each other. A family which deeply embodies the values the company carries and reflects these values throughout their work. But also a family where everybody feels genuinely valued, happy and ignited to create,” said Vitaly Friedman in an update thought recently and I couldn’t agree more.
How do you justify working for a company that has a significant influence on our world and our everyday lives, and not necessarily with the best intentions? Meredith Whittaker wrote up her story of starting at Google, having an amazing time there, and now leaving the company because she could no longer justify that Google uses her work and technology to get involved in the fossil energy, healthcare, governance, and transportation businesses, and not always with the focus on improving everyone’s lives or making our environment a better place to live in, but simply for profit.
Synchronous meetings are a problem in nearly every company. They take a lot of time from a lot of people and disrupt any schedule or focused work. So here’s how Buffer switched to asynchronous meetings, including great tips and insights into why many tools out there don’t work well.
Actionable advice is what we usually look for when reading an article. However, it’s not always possible or the best option to write actionable advice and certainly not always a good idea to follow actionable advice blindly. That’s because most of the time actionable advice also is opinionated, tailored, customized advice that doesn’t necessarily fit your purpose. Sharing experiences instead of actionable advice fosters creativity so everyone can find their own solution, their own advice.
Sam Clulow’s “Our Planet, Our Problem” is a great piece of writing that reminds us of who we are and what’s important for us and how we can live in a city and switch to a better, more thoughtful and natural life.
Climate change is a topic all around the world now and it seems that many people are concerned about it and want to take action. But then, last month we had the busiest air-travel day ever in history. Airplanes account for one of the biggest shares of climate-damaging emissions, so it’s key to reduce air travel as much as possible from today on. Coincidentally, this was also the hottest week ever measured in Europe. We as individuals need to finally cut down on flights, regardless of how tempting that next $50 holiday flight to a nice destination might be, and regardless of whether it’s an important business meeting. What do we have video conferencing solutions for? Why do people claim to work remotely if they then fly around the world dozens of times in their life? There are so many nice destinations nearby, reachable by train or, if needed, by car.
The team at Buffer shares what worked and what didn’t work for them when they switched to asynchronous meetings. (Image credit)
Going Beyond…
Leo Babauta shares a tip on how to stop overthinking by cutting through indecision. We will never have the certainty we’d like to have in our lives so it’s quite good to have a strategy for dealing with uncertainty. As I’m struggling with this a lot, I found the article helpful.
The ethical practices that can serve as a code of conduct for data sensemaking professionals are built upon a single fundamental principle. It is the same principle that medical doctors swear as an oath before becoming licensed: Do no harm. Here’s “Ethical Data Sensemaking.”
Paul Hayes shares his experience from trying to live plastic-free for a month and why it’s hard to stick to it. It’s surprising how shopping habits need to be changed and why you need to spend your money in a totally different way and cannot rely on online stores anymore.
Oil powers the cars we drive and the flights we take, it heats many of our homes and offices. It is in the things we use every day and it plays an integral role across industries and economies. Yet it has become very clear that the relentless burning of fossil fuels cannot continue unabated. Can the world be less reliant on oil?
Uber and Lyft admit that they’re making traffic congestion worse in cities. Next time you use any of those new taxi apps, try to remind yourself that you’re making the situation worse for many people in the city.
Thank you for reading. If you like what I write, please consider supporting the Web Development Reading List.
—Anselm
(cm)
0 notes
batch83 · 6 years ago
Text
Grit, Focus & KonMari
Session 01 : Leadership I
Speaker: Dr. Cristina Liamzon
Cristina Liamzon is building a global community of empowered Filipino migrant workers through a leadership and education program that encourages them to become drivers of change in the Philippines or in their host countries. Read her Ashoka Fellowship bio here.
What key things did you learn from the session?
My key takeaways for this session are:
How to be an effective servant leader (without title) using the 6 Principles of Leadership
Learning the responsibilities of Migrants in the society
Being a changemaker and forging a legacy
Why are these learnings important and why did they have such an impact on you?
For the larger part of my overseas working life, it was mostly about the monetary gain and what it could do for me & my family. But I find that there’s never enough of a salary increase to satisfy a person, unless I have an endgame, an ultimate dream. With that as a jump-off point, I began to rearrange things to formulate a plan. With that, I found my new motivation.
Dr. Liamzon talking about focus and determination resonates so much with me because I think I’ve found what I want to do for the rest of my working life. Her talk about grit resonates with me because it’s what I have seen in my parents -- how they’ve navigated the challenges of raising a family on the most meager resources. I think my earliest model for leadership may be my Mum. She has a natural talent for communicating with people and thus earning their trust and respect. She has an eye for opportunities and the ability to deliver on what she says she’ll do. She accomplished so much more than most of her educated siblings, and I often wonder just how far she would have gone & how she would have turned out if she had finished higher education.
I also look up to my Dad as my model for focus and motivation. As a young lad from a rural area, he made his way to the city and put himself through university while working and raising our family. He never stopped learning. There’s always a new technical, agriculture, or TESDA course or seminar that he’s signing up for, even after his retirement.
Lastly, I look up to my grandfather as my all-time favourite model in leadership. He embodied most of the 6 principles for most of his adult life. He was a respected member of our hometown without holding any title or position of power. I most admire how he helped shape the lives of a number of young men who were under his employ during his heyday. Even decades after his death, his legacy is fondly remembered.
How will you apply these learnings in your private life/ work life?
I think the most obvious way to start my journey towards a better self is to Marie-Kondo the bad habits that don’t add value. Most prominent of these is decluttering the social media feeds that mess with my psyche and muddle my focus. I’ve been a constant practitioner of taking clutter out of my house every quarter or so, so why not mute/block/uninstall/minimise the things that mindlessly pull my attention to my phone?
Working on my Self-Mastery Index [^1] is important in keeping with the qualities of a person of Integrity[^2] and giving honour to my word. As an engineering staff, keeping one's word is important to ensure the smooth work flow especially during critical times in the design. The "Under-promise, Over-deliver Rule" has always been an effective strategy in the construction/design world.
I realise:
…that confronting problems instead of people is something that I need to work on moving forward, in both my professional and personal life.
…that as a parent I need to set an example to my son, for my actions and habits will be remembered more than what I say.
…and that by striving and giving encouragement to achieve excellence, I am giving myself and others room to grow, to not be complacent and to not be limited by being "best" but being "better".[^3]
[^1] : Ratio between promises made and those that are kept (to oneself and to others)
[^2] : Integrity is more than just keeping your words, it giving honour to them. Honouring your word is a 2-step process: (a) Keeping your word and doing it on time; (b) Whenever you will not be keeping your word, just as soon as you become aware that you will not be keeping your word (& not keeping your word on time), saying to everyone that's impacted; (i) That you will not be keeping your word, (ii) That you will keep that word in the future, and by when, OR, that you won’t be keeping that word at all, and (iii) what you will do to deal with the impact on others of the failure to keep your word (or to keep it on time).
[^3] : The Danish Way of Parenting (Jessica Joelle Alexander, Iben Sandahl; 1996) ~ Authenticity in raising kids and Why honesty creates a stronger sense of self. How authentic praise can be used to form a growth mind-set rather than a fixed mind-set, making your children more resilient
0 notes
the-grendel-khan · 8 years ago
Text
Letter to an interested student.
I had the good luck to chat with a high-school student who was interested in doing the most good she could do with hacker skills. So I wrote the letter I wish someone had written me when I was an excitable, larval pre-engineer. Here it is, slightly abridged.
Hi! You said you were interested in learning IT skills and using them for the greater good. I've got some links for learning to code, and opportunities for how to use those skills. There's a lot to read in here--I hope you find it useful!
First, on learning to code. You mentioned having a Linux environment set up, which means that you have a Python runtime readily available. Excellent! There are a lot of resources available, a lot of languages to choose from. I recommend Python--it's easy to learn, it doesn't have a lot of sharp edges, and it's powerful enough to use professionally (my current projects at work are in Python). And in any case, mathematically at least, all programming languages are equally powerful; they just make some things easier or more difficult.
I learned a lot of Python by doing Project Euler; be warned that the problems do get very challenging, but I had fun with them. (I'd suggest attempting them in order.) I've heard good things about Zed Shaw's Learn Python the Hard Way, as well, though I haven't used that method to teach myself anything. It can be very, very useful to have a mentor or community to work with; I suggest finding a teacher who's happy to help you with your code, or at the very least sign up for stackoverflow, a developer community and a very good place to ask questions. (See also /r/learnprogramming's FAQ.) The really important thing here is that you have something you want to do with the skills you want to learn. (As it is written, "The first virtue is curiosity. A burning itch to know is higher than a solemn vow to pursue truth.") Looking at my miscellaneous-projects directory on my laptop, the last thing I wrote was a Python script to download airport diagrams from the FAA's website (via some awful screenscraping logic), convert them from PDFs to SVGs, and upload them to Wikimedia Commons. It was something I was doing by hand, and then I automated it. I've also used R (don't use R if you can help it; it's weird and clunky) to make choropleth maps for internet arguments, and more Python to shuffle data to make Wikipedia graphs. It's useful to think of programming as powered armor for your brain.
You asked about ethical hacking. Given that the best minds of my generation are optimizing ad clicks for revenue, this is a really virtuous thing to want to do! So here's what I know about using IT skills for social good.
I mentioned the disastrous initial launch of healthcare.gov; TIME had a narrative of what happened there; see also Mikey Dickerson (former SRE manager at Google)'s speech to SXSW about recruiting for the United States Digital Service. The main public-service organizations in the federal government are 18F (a sort of contracting organization in San Francisco) and the United States Digital Service, which works on larger projects and tries to set up standards. The work may sound unexciting, but it's extraordinarily vital--veterans getting their disability, immigrants not getting stuck in limbo, or a child welfare system that works. It's easy to imagine that providing services starts and ends with passing laws, but if our programs don't actually function, people don't get the benefits or services we fought to allocate to them. (See also this TED talk.)
The idea is that most IT professionals spend a couple of years in public service at one of these organizations before going into the industry proper. (I'm not sure what the future of 18F/USDS is under the current administration, but this sort of thing is less about what policy is and more about basic competence in executing it.)
For a broader look, you may appreciate Bret Victor's "What Can a Technologist Do About Climate Change?", or consider Vi Hart and Nicky Case's "Parable of the Polygons", a cute web-based 'explorable' which lets you play with Thomas Schelling's model of housing segregation (i.e., you don't need actively bitter racism in order to get pretty severe segregation, which is surprising).
For an idea of what's at stake with certain safety-critical systems, read about the Therac-25 disaster and the Toyota unintended-acceleration bug. (We're more diligent about testing the software we use to put funny captions on cat pictures than they were with the software that controls how fast the car goes.) Or consider the unintended consequences of small, ubiquitous devices.
And for an example of what 'white hat' hacking looks like, consider Google's Project Zero, which is a group of security researchers finding and reporting vulnerabilities in widely-used third-party software. Some of their greatest hits include "Cloudbleed" (an error in a proxying service leading to private data being randomly dumped into web pages en masse), "Rowhammer" (edit memory you shouldn't be able to control by exploiting physical properties of RAM chips), and amazing bug reports for products like TrendMicro Antivirus.
To get into that sort of thing, security researchers read reports like those linked above, do exercises like "capture the flag" (trying to break into a test system), and generally cultivate a lateral mode of thinking--similar to what stage magicians do, in a way. (Social engineering is related to, and can multiply the power of, traditional hacking; Kevin Mitnick's "The Art of Deception" is a good read. He gave a public talk a few years ago; I think that includes his story of how he stole proprietary source code from Motorola with nothing but an FTP drop, a call to directory assistance and unbelievable chutzpah.)
The rest of this is more abstract, hacker-culture advice; it's less technical, but it's the sort of thing I read a lot of on my way here.
For more about ethical hacking, I'd be remiss if I didn't mention Aaron Swartz; he was instrumental in establishing Creative Commons licensing, the RSS protocol, the Markdown text-formatting language, Reddit and much else. As part of his activism, he mass-harvested academic journal articles from JSTOR using a guest account at MIT. The feds arrested him and threatened him with thirty-five years in prison, and he took his own life before going to trial. It's one of the saddest stories of the internet age, I think, and it struck me particularly because it seemed like the kind of thing I'd have done, if I'd been smarter, more civic-minded, and more generally virtuous. There's a documentary, The Internet's Own Boy, about him.
Mark Pilgrim is a web-standards guy who previously blogged a great deal, but disappeared from public (internet) life around 2011. He wrote about the freedom to tinker, early internet history, long-term preservation (see also), and old-school copy protection, among other things.
I'll leave you with two more items. First, a very short talk, "wat", by Gary Bernhardt, on wacky edge cases in programming languages. And second, a book recommendation. If you haven't read it before, Gödel, Escher, Bach is a wonderfully fun and challenging read; it took me most of my senior year of high school to get through it, but I'd never quite read anything like it. It's not directly about programming, but it's a marvelous example of the hacker mindset. MIT OpenCourseWare has a supplemental summer course. (The author's style isn't for everyone; if you do like it, his follow-up Le Ton beau de Marot (about language and translation) is also very, very good.)
I hope you enjoy; please feel free to send this around to your classmates--let me know if you have any more specific questions, or any feedback. Thanks!
4 notes · View notes
ienajah · 5 years ago
Text
Jidoka versus automation
The most striking characteristic of automation in manufacturing is that, while making progress, it has consistently fallen short of expectations. In Player Piano, Kurt Vonnegut articulated the 1950s vision of automated factories: integrated machines produce everything while their former operators are unemployed and the managers spend their time playing silly team-building games at offsite meetings. 60 years on, the most consistently implemented part of Vonnegut’s vision is the silly team-building games… Nippon Steel’s Yawata Steel Works in Kitakyushu, Japan, produce as much today with 3,000 employees as they did with 40,000 in 1964, and this transition was accomplished without generating massive unemployment. There are other such limited areas of automation success, like the welding and painting of car bodies. When manufacturing jobs are lost today, it is almost never to automation and almost always to cheaper human competition elsewhere. In the words of an experienced operator in a plant making household goods in the US, “When I joined 25 years ago, I expected these jobs to be automated soon, but we’re still doing them the same way.”

What is holding up automation today is not technology but the lack of consideration for people. There are entire books on automation without a paragraph on what people’s roles should be. Of course, a fully automatic, “lights-out” factory has nobody working inside, so why bother? There are at least two reasons. First, even an automatic plant needs people to program its processes, tell it what work to do, maintain it, monitor its operations and respond to emergencies. Second, successful automation is incremental and cannot be developed without the help of the people working in the plants throughout the migration.

Enter autonomation, or jidoka, which is sometimes also called “automation with a human touch” but really should be called human-centered automation. Instead of systems of machines and controls, it is about human-machine interactions. In the classical House of Lean model, the two pillars holding up the roof are Just-In-Time and Autonomation, or Jidoka. Figure 1 is lifted from the introduction to Working with Machines, and shows what happens when the jidoka pillar is ignored. More and more, the Lean literature in English uses the Japanese word jidoka rather than autonomation, but with its scope reduced to the idea of stopping production whenever anything goes wrong, and the concept is tucked away under the umbrella of Quality Management.

Toyota’s jidoka is a tricky term, because it is an untranslatable pun. Originally, the Japanese word for automation is jidoka (自動化), literally meaning “transformation into something that moves by itself.” What Toyota did is add the human radical 人 to the character 動 for “move,” turning it into the character 働 for “work,” which is still pronounced “do” but changes the meaning to “transformation into something that works by itself.” It’s automation with the human radical added, but it is still automation, with all the technical issues the term implies.

The discussion of automation in the first draft of Working with Machines started with the following historical background, which was edited out, like the chapter on locomotives and typewriters, on the ground that it contained no actionable recommendations. In this blog, I can let you be the judge of its value.
From tea-serving wind-up dolls to autonomation

The word automation was first used by Ford manufacturing Vice President Delmar Harder in 1947 for devices transferring materials between operations. He set as targets a payback period of at most one year in labor savings, which meant in practice that each device should not cost more than 15% above an operator’s average yearly wages and eliminate at least one operator. While this kind of economic analysis is still used, from the perspective of Toyota’s system, Ford’s focus on materials handling was putting the integration cart before the unit operation horse. Toyota’s approach focuses on individual operations first, and only then addresses movements of parts between them.

In 1952, John Diebold broadened the meaning of automation to what has become the common usage, and painted a picture of the near future that was consistent with Kurt Vonnegut’s. At that time, automatic feedback control was perceived to be the key enabling technology for automation, to be applied to ever larger and more complex systems. It was not a new concept, having been applied since 1788 in the centrifugal governor regulating the speed of a steam engine (see Figure 2). Applying electronics to feedback control in World War II had made it possible, for example, to move a tank’s gun turret to a target angle just by turning a knob. Postwar progress in the theory and application of feedback control caused many contemporary thinkers, like Norbert Wiener, both to see in the concept a philosophical depth that is truly not there, and to underestimate what else would need to be done in order to achieve automation. Of course, if you cannot tell a machine to take a simple step and expect it to be executed accurately and precisely, then not much else matters. Once you can, however, you are still faced with the problem of sequencing these steps to get a manufacturing job done.

While automatic feedback control was historically central to the development of automatic systems, it is not at center stage in manufacturing automation today. With sufficiently stable processes, open-loop systems work fine, or feedback control is buried deep inside such off-the-shelf components as mass flow controllers, thermostats, or humidity controllers. Manufacturing engineers are occasionally aware of it in the form of variable-speed drives or adaptive control for machine tools, but other issues dominate.

Fixed-sequence and even logic programming also have a history that is as long as that of feedback control and are by no means easier to achieve. Figure 2 shows two examples of 18th-century automata moved by gears, levers and cams through sequences that are elaborate but fixed. These concepts found their way into practical applications in manufacturing as early as 1784, with Oliver Evans’s continuous flour mill that integrated five water-powered machines through bucket elevators, conveyors and chutes (see Figure 3). The same kind of thinking later led to James Bonsack’s cigarette making machine in 1881, to the kind of automatic systems that have dominated high-volume processing and bottling or cartoning plants for 100 years, and to the transfer lines that have been used in automotive machining since World War II. Fixed-sequence automation works, but only in dedicated lines for products with takt times under 1 second, where the investment is justifiable and flexibility unnecessary. Rube Goldberg machines parody this type of automation.

Figure 3. Winner of the 2008 Penn State Rube Goldberg machine contest
Automation with flexibility is of course a different goal, and one that has been pursued almost as long, through programmable machines. The earliest example used in production is the Jacquard loom from 1801, shown in Figure 4. It is also considered a precursor to the computer, but it was not possible to make a wide variety of machines programmable until the actual computer was not only invented but made sufficiently small, cheap and easy to use, which didn’t occur until decades after Vonnegut and Diebold were writing. By the mid-1980s, the needed technology existed, but the vision of automation remained unfulfilled. In fact, more technology was available than the human beings on the shop floor, in engineering, and in management knew what to do with. As discussed in the post on Opinels and Swiss knives, the computer is a game changer. In manufacturing, this was not widely recognized when it became true, and it still is not today.

Writing in 1952, John Diebold saw nothing wrong with the way manufacturing was done in the best US plants, nor did he have any reason to, as the entire world was looking at the US as a model for management in general and manufacturing in particular. In the 1980s, however, when GM invested $40B in factory automation, it was automating processes that were no longer competitive and, by automating them, making them more difficult to improve.

Whether the automation pioneers’ vision will ever come true is in question. So far, every time one obstacle has been overcome, another one has taken its place. Once feedback control issues were resolved came the challenge of machine programming. Next is the need to have a manufacturing concept that is worth automating, as opposed to an obsolete approach to flow and unit processes. And finally, the human interface issues discussed must be addressed. 21st-century manufacturers do not make automation their overall strategy. Instead, automation is a tool. In a particular cell, for example, one operator may be used only 20% of the time, and a targeted automation retrofit to one of the machines in the cell may be the key to eliminating this 20% and pulling the operator out of the cell.
Source: Jidoka versus automation
0 notes
technato · 7 years ago
Text
Carnegie Mellon is Saving Old Software from Oblivion
A prototype archiving system called Olive lets vintage code run on today’s computers
Illustration: Nicholas Little
In early 2010, Harvard economists Carmen Reinhart and Kenneth Rogoff published an analysis of economic data from many countries and concluded that when debt levels exceed 90 percent of gross national product, a nation’s economic growth is threatened. With debt that high, expect growth to become negative, they argued.
This analysis was done shortly after the 2008 recession, so it had enormous relevance to policymakers, many of whom were promoting high levels of debt spending in the interest of stimulating their nations’ economies. At the same time, conservative politicians, such as Olli Rehn, then an EU commissioner, and U.S. congressman Paul Ryan, used Reinhart and Rogoff’s findings to argue for fiscal austerity.
Three years later, Thomas Herndon, a graduate student at the University of Massachusetts, discovered an error in the Excel spreadsheet that Reinhart and Rogoff had used to make their calculations. The significance of the blunder was enormous: When the analysis was done properly, Herndon showed, debt levels in excess of 90 percent were associated with average growth of positive 2.2 percent, not the negative 0.1 percent that Reinhart and Rogoff had found.
Herndon could easily test the Harvard economists’ conclusions because the software that they had used to calculate their results—Microsoft Excel—was readily available. But what about much older findings for which the software originally used is hard to come by?
You might think that the solution—preserving the relevant software for future researchers to use—should be no big deal. After all, software is nothing more than a bunch of files, and those files are easy enough to store on a hard drive or on tape in digital format. For some software at least, the all-important source code could even be duplicated on paper, avoiding the possibility that whatever digital medium it’s written to could become obsolete.
Saving old programs in this way is done routinely, even for decades-old software. You can find online, for example, a full program listing for the Apollo Guidance Computer—code that took astronauts to the moon during the 1960s. It was transcribed from a paper copy and uploaded to GitHub in 2016.
While perusing such vintage source code might delight hard-core programmers, most people aren’t interested in such things. What they want to do is use the software. But keeping software in ready-to-run form over long periods of time is enormously difficult, because to be able to run most old code, you need both an old computer and an old operating system.
You might have faced this challenge yourself, perhaps while trying to play a computer game from your youth. But being unable to run an old program can have much more serious repercussions, particularly for scientific and technical research.
Along with economists, many other researchers, including physicists, chemists, biologists, and engineers, routinely use software to slice and dice their data and visualize the results of their analyses. They simulate phenomena with computer models that are written in a variety of programming languages and that use a wide range of supporting software libraries and reference data sets. Such investigations and the software on which they are based are central to the discovery and reporting of new research results.
Imagine that you’re an investigator and want to check calculations done by another researcher 25 years ago. Would the relevant software still be around? The company that made it may have disappeared. Even if a contemporary version of the software exists, will it still accept the format of the original data? Will the calculations be identical in every respect—for example, in the handling of rounding errors—to those obtained using a computer of a generation ago? Probably not.
Researchers’ growing dependence on computers and the difficulty they encounter when attempting to run old software are hampering their ability to check published results. The problem of obsolescent software is thus eroding the very premise of reproducibility—which is, after all, the bedrock of science.
The issue also affects matters that could be subject to litigation. Suppose, for example, that an engineer’s calculations show that a building design is robust, but the roof of that building nevertheless collapses. Did the engineer make a mistake, or was the software used for the calculations faulty? It would be hard to know years later if the software could no longer be run.
That’s why my colleagues and I at Carnegie Mellon University, in Pittsburgh, have been developing ways to archive programs in forms that can be run easily today and into the future. My fellow computer scientists Benjamin Gilbert and Jan Harkes did most of the required coding. But the collaboration has also involved software archivist Daniel Ryan and librarians Gloriana St. Clair, Erika Linke, and Keith Webster, who naturally have a keen interest in properly preserving this slice of modern culture.
Bringing Back Yesterday’s Software
The Olive system has been used to create 17 different virtual machines that run a variety of old software, some serious, some just for fun. Here are several views from those archived applications
1/8
NCSA Mosaic 1.0, a pioneering Web browser for the Macintosh from 1993.
2/8
Chaste (Cancer, Heart and Soft Tissue Environment) 3.1 for Linux from 2013.
<img src="https://spectrum.ieee.org/image/MzEzMTUzMg.jpeg&quot; data-original="/image/MzEzMTUzMg.jpeg" id="618441086_2" alt="The Oregon Trail 1.1, a game for the Macintosh from 1990.”> 3/8
The Oregon Trail 1.1, a game for the Macintosh from 1990.
<img src="https://spectrum.ieee.org/image/MzEzMTUzNQ.jpeg&quot; data-original="/image/MzEzMTUzNQ.jpeg" id="618441086_3" alt="Wanderer, a game for MS-DOS from 1988.”> 4/8
Wanderer, a game for MS-DOS from 1988.
<img src="https://spectrum.ieee.org/image/MzEzMTU1MA.jpeg&quot; data-original="/image/MzEzMTU1MA.jpeg" id="618441086_4" alt="Mystery House, a game for the Apple II from 1982.”> 5/8
Mystery House, a game for the Apple II from 1982.
6/8
The Great American History Machine, an educational interactive atlas for Windows 3.1 from 1991.
7/8
Microsoft Office 4.3 for Windows 3.1 from 1994.
8/8
ChemCollective, educational chemistry software for Linux from 2013.
Because this project is more one of archival preservation than mainstream computer science, we garnered financial support for it not from the usual government funding agencies for computer science but from the Alfred P. Sloan Foundation and the Institute for Museum and Library Services. With that support, we showed how to reconstitute long-gone computing environments and make them available online so that any computer user can, in essence, go back in time with just a click of the mouse.
We created a system called Olive—an acronym for Open Library of Images for Virtualized Execution. Olive delivers over the Internet an experience that in every way matches what you would have obtained by running an application, operating system, and computer from the past. So once you install Olive, you can interact with some very old software as if it were brand new. Think of it as a Wayback Machine for executable content.
To understand how Olive can bring old computing environments back to life, you have to dig through quite a few layers of software abstraction. At the very bottom is the common base of much of today’s computer technology: a standard desktop or laptop endowed with one or more x86 microprocessors. On that computer, we run the Linux operating system, which forms the second layer in Olive’s stack of technology.
Sitting immediately above the operating system is software written in my lab called VMNetX, for Virtual Machine Network Execution. A virtual machine is a computing environment that mimics one kind of computer using software running on a different kind of computer. VMNetX is special in that it allows virtual machines to be stored on a central server and then executed on demand by a remote system. The advantage of this arrangement is that your computer doesn’t need to download the virtual machine’s entire disk and memory state from the server before running that virtual machine. Instead, the information stored on disk and in memory is retrieved in chunks as needed by the next layer up: the virtual-machine monitor (also called a hypervisor), which can keep several virtual machines going at once.
Each one of those virtual machines runs a hardware emulator, which is the next layer in the Olive stack. That emulator presents the illusion of being a now-obsolete computer—for example, an old Macintosh Quadra with its 1990s-era Motorola 68040 CPU. (The emulation layer can be omitted if the archived software you want to explore runs on an x86-based computer.)
The next layer up is the old operating system needed for the archived software to work. That operating system has access to a virtual disk, which mimics actual disk storage, providing what looks like the usual file system to still-higher components in this great layer cake of software abstraction.
Above the old operating system is the archived program itself. This may represent the very top of the heap, or there could be an additional layer, consisting of data that must be fed to the archived application to get it to do what you want.
The upper layers of Olive are specific to particular archived applications and are stored on a central server. The lower layers are installed on the user’s own computer in the form of the Olive client software package. When you launch an archived application, the Olive client fetches parts of the relevant upper layers as needed from the central server.
Illustration: Nicholas Little
Layers of Abstraction: Olive requires many layers of software abstraction to create a suitable virtual machine. That virtual machine then runs the old operating system and application.
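One way to picture the stack is as a recipe that names each layer of an archived environment. The sketch below is not Olive’s real package format; the field names and values (including the emulator name) are assumptions chosen only to mirror the layers described above.

# Hypothetical description of one archived environment; not Olive's actual format.
mosaic_vm = {
    "emulator": "basilisk2",          # assumed name for a 68040 Macintosh emulator
    "guest_os": "macos-7.5.disk",     # virtual disk image holding the old operating system
    "application": "ncsa-mosaic-1.0", # the archived program itself
    "input_data": None,               # optional top layer: data fed to the application
    "memory_mb": 32,                  # memory the old machine is emulated with
}

def launch(recipe):
    # A real client would hand this recipe to the virtual-machine monitor;
    # here we just print what would be assembled, layer by layer.
    for layer, value in recipe.items():
        print(f"{layer:12} -> {value}")

launch(mosaic_vm)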
That’s what you’ll find under the hood. But what can Olive do? Today, Olive consists of 17 different virtual machines that can run a variety of operating systems and applications. The choice of what to include in that set was driven by a mix of curiosity, availability, and personal interests. For example, one member of our team fondly remembered playing The Oregon Trail when he was in school in the early 1990s. That led us to acquire an old Mac version of the game and to get it running again through Olive. Once word of that accomplishment got out, many people started approaching us to see if we could resurrect their favorite software from the past.
The oldest application we’ve revived is Mystery House, a graphics-enabled game from the early 1980s for the Apple II computer. Another program is NCSA Mosaic, which people of a certain age might remember as the browser that introduced them to the wonders of the World Wide Web.
Olive provides a version of Mosaic that was written in 1993 for Apple’s Macintosh System 7.5 operating system. That operating system runs on an emulation of the Motorola 68040 CPU, which in turn is created by software running on an actual x86-based computer that runs Linux. In spite of all this virtualization, performance is pretty good, because modern computers are so much faster than the original Apple hardware.
Pointing Olive’s reconstituted Mosaic browser at today’s Web is instructive: Because Mosaic predates Web technologies such as JavaScript, HTTP 1.1, Cascading Style Sheets, and HTML 5, it is unable to render most sites. But you can have some fun tracking down websites composed so long ago that they still look just fine.
What else can Olive do? Maybe you’re wondering what tools businesses were using shortly after Intel introduced the Pentium processor. Olive can help with that, too. Just fire up Microsoft Office 4.3 from 1994 (which thankfully predates the annoying automated office assistant “Clippy”).
Perhaps you just want to spend a nostalgic evening playing Doom for DOS—or trying to understand what made such first-person shooter games so popular in the early 1990s. Or maybe you need to redo your 1997 taxes and can’t find the disk for that year’s version of TurboTax in your attic. Have no fear: Olive has you covered.
On the more serious side, Olive includes Chaste 3.1. The name of this software is short for Cancer, Heart and Soft Tissue Environment. It’s a simulation package developed at the University of Oxford for computationally demanding problems in biology and physiology. Version 3.1 of Chaste was tied to a research paper published in March 2013. Within two years of publication, though, the source code for Chaste 3.1 no longer compiled on new Linux releases. That’s emblematic of the challenge to scientific reproducibility Olive was designed to address.
Illustration: Nicholas Little
To keep Chaste 3.1 working, Olive provides a Linux environment that’s frozen in time. Olive’s re-creation of Chaste also contains the example data that was published with the 2013 paper. Running the data through Chaste produces visualizations of certain muscle functions. Future physiology researchers who wish to explore those visualizations or make modifications to the published software will be able to use Olive to edit the code on the virtual machine and then run it.
For now, though, Olive is available only to a limited group of users. Because of software-licensing restrictions, Olive’s collection of vintage software is currently accessible only to people who have been collaborating on the project. The relevant companies will need to give permissions to present Olive’s re-creations to broader audiences.
We are not alone in our quest to keep old software alive. For example, the Internet Archive is preserving thousands of old programs using an emulation of MS-DOS that runs in the user’s browser. And a project being mounted at Yale, called EaaSI (Emulation as a Service Infrastructure), hopes to make available thousands of emulated software environments from the past. The scholars and librarians involved with the Software Preservation Network have been coordinating this and similar efforts. They are also working to address the copyright issues that arise when old software is kept running in this way.
Olive has come a long way, but it is still far from being a fully developed system. In addition to the problem of restrictive software licensing, various technical roadblocks remain.
One challenge is how to import new data to be processed by an old application. Right now, such data has to be entered manually, which is both laborious and error prone. Doing so also limits the amount of data that can be analyzed. Even if we were to add a mechanism to import data, the amount that could be saved would be limited to the size of the virtual machine’s virtual disk. That may not seem like a problem, but you have to remember that the file systems on older computers sometimes had what now seem like quaint limits on the amount of data they could store.
Another hurdle is how to emulate graphics processing units (GPUs). For a long while now, the scientific community has been leveraging the parallel-processing power of GPUs to speed up many sorts of calculations. To archive executable versions of software that takes advantage of GPUs, Olive would need to re-create virtual versions of those chips, a thorny task. That’s because GPU interfaces—what gets input to them and what they output—are not standardized.
Clearly there’s quite a bit of work to do before we can declare that we have solved the problem of archiving executable content. But Olive represents a good start at creating the kinds of systems that will be required to ensure that software from the past can live on to be explored, tested, and used long into the future.
This article appears in the October 2018 print issue as “Saving Software From Oblivion.”
About the Author
Mahadev Satyanarayanan is a professor of computer science at Carnegie Mellon University, in Pittsburgh.
Carnegie Mellon is Saving Old Software from Oblivion syndicated from https://jiohowweb.blogspot.com
3 notes · View notes
neptunecreek · 5 years ago
Text
Google's AMP, the Canonical Web, and the Importance of Web Standards
Have you ever clicked on a link after googling something, only to find that Google didn’t take you to the actual webpage but to some weird Google-fied version of it? Instead of the web address being the source of the article, it still says “google” in the address bar on your phone? That’s what’s known as Google Accelerated Mobile Pages (AMP), and now Google has announced that AMP has graduated from the OpenJS Foundation Incubation Program. The OpenJS Foundation is a merged effort between major projects in the JavaScript ecosystem, such as NodeJS and jQuery, whose stated mission is “to support the healthy growth of the JavaScript and web ecosystem”. But instead of a standard starting with the web community, a giant company is coming to the community after they’ve already built a large part of the mobile web and are asking for a rubber stamp. Web community discussion should be the first step of making web standards, and not just a last-minute hurdle for Google to clear.
What Is AMP?
This Google-backed, stripped-down HTML framework was created with the promise of faster web pages and a better user experience, cutting out slower-loading content such as pages heavy with JavaScript. At a high level, AMP works by serving lightweight, stripped-down versions of full web pages for mobile viewing.
The Google AMP project was announced in late 2015 with the promise of providing publishers a faster way of serving and distributing content to their users. It was also marketed as a more adaptable approach than Apple News and Facebook Instant Articles. AMP pages began making an appearance by 2016. But right away, many observed that AMP encroached on the principles of the open web. The web was built on open standards, developed through consensus, that small and large actors alike can use; in this case, that means keeping open web standards at the forefront and discouraging proprietary, closed ones.
Instead of utilizing standard HTML markup tags, a developer would use AMP tags. For example, here’s what an embedded image looks like in classic HTML, versus what it looks like using AMP:
HTML Image Tag:
<img src="src.jpg" alt="src image" />
AMP Image Tag: 
<amp-img src="src.jpg" width="900" height="675" layout="responsive" />
Since launch, page speeds have proven to be faster when using AMP, so the technology’s promises aren’t empty from a speed perspective alone. Of course, there are ways of improving performance other than using AMP, such as minifying files, building lighter code, using CDNs (content delivery networks), and caching. There are also other Google-backed technologies like PWAs (progressive web applications) and service workers.
AMP has been around for four years now, and the criticisms still carry into today with AMP’s latest progressions around a very important part of the web, the URL.
Canonical URLs and AMP URLs
When you visit a site, maybe your favorite news site, you would normally see the original domain along with an associated path to the page you are on:
https://www.example.com/some-web-page
This, along with its SSL certificate, gives you a good amount of trust that you are seeing web content served from this site at this URL. This is what would be considered a canonical URL.
An AMP URL, however, can look like this:
https://www.example.com/platform/amp/some-web-page
Using canonical URLs, users can more easily verify that the site they’re on is the one they’re trying to visit. But AMP URLs muddied the waters and made users adopt new ways to verify the origins of original content.
Whose Content?
One step further is their structure for pre-rendered pages from cached content. This URL would not be in view of the user, but rather the content (text, images, etc.) served onto the cached page would be coming from the URL below.
https://www-example-com.cdn.ampproject.org/c/www.example.com/amp/doc.html
The final URL, the one in view in the URL bar, of a cached AMP page would look something like this:
https://www.google.com/amp/www.example.com/amp.doc.html
This cache model does not follow the web origin concept and creates a new framework and structure to adhere to. The promise is better performance and a better experience for users. Yet the approach is implementation first and web standards later. Since Google has become such an ingrained part of the modern web for so many, any technology they deploy would immediately have a large share of users and adopters. This is also paired with other arguments other product teams within Google have made to reshape the URL as we know it. This fundamentally changed the way the mobile web is served for many users.
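The hostname mapping visible in that cached URL can be sketched in a few lines of Python. This is a simplified illustration based on the pattern shown above; the real cache scheme handles further cases (such as internationalized domain names) that are ignored here.

def amp_cache_host(origin_host: str) -> str:
    # Dots in the publisher's hostname become dashes, existing dashes are doubled,
    # and the result becomes a subdomain of the cache.
    return origin_host.replace("-", "--").replace(".", "-") + ".cdn.ampproject.org"

def amp_cache_url(origin_host: str, path: str) -> str:
    return f"https://{amp_cache_host(origin_host)}/c/{origin_host}{path}"

print(amp_cache_url("www.example.com", "/amp/doc.html"))
# -> https://www-example-com.cdn.ampproject.org/c/www.example.com/amp/doc.html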
Another, more recent development is the support for Signed HTTP Exchanges, or “SXG”, a subset of the Web Packages standard that allows further decoupling of distribution of web content from its origins with cryptographically signed HTTP exchanges (a web page). This is supposed to address the problem, introduced by AMP, that the URL a user sees does not correspond to the page they’re trying to visit. SXG allows the canonical URL (instead of the AMP URL) to be shown in the browser when you arrive, closing the loop back to the original publisher. The positive here is that a web standard was used, but the negative here is the speed of adoption without general consensus from other major stakeholders. Currently, SXG is only supported in Chrome and Chromium based browsers.
Pushing AMP: how did a new “standard” take over?
News publishers were among the first to adopt AMP. Google even partnered with a major CMS (content management system), WordPress, to further promote AMP. Publishers use CMS services to upload, edit, and host content, and WordPress holds about 60% of the market share as the CMS of choice. Publishers also compete on other Google products, such as Google Search. So perhaps some publishers adopted AMP because they thought it would improve SEO (search engine optimization) on one of the web’s most used search engines. However, this argument has been disputed by Google, which maintains that performance is prioritized no matter what is used to get a page to that performance measure. Since the Google Search algorithm is largely secret, we can only take these statements at their word. Tangentially, the “Top Stories” feature in Search on mobile has recently dropped AMP as a requirement.
The AMP project was more closed off in terms of control at the beginning of its launch, despite the fact it promoted itself as an open source project. Publishers ended up reporting higher speeds, but this was left up to a “time will tell” set of metrics. In short, the statement “you don’t need AMP to rank higher” is often competing with “just use AMP and you will rank higher”, which can be tempting to publishers trying to reach the performance bar needed to get their content prioritized.
Web Community First
We should focus less on whether or not AMP is a good tool for performance, and more on how this framework was molded by Google’s initial ownership. The cache layer is owned by Google, and even though it’s not required, most common implementations use this cache feature. Concerns around analytics have been addressed, and they have also done the courtesy of allowing other major ad vendors into the AMP model for ad content. This is a mere concession though, since Google Analytics has such a large market share of the measured web.
If Google were simply a web performance company, that would still be too much centralization of the web’s decisions. But they are not just a one-function company; they are a giant conglomerate that already controls the largest mobile OS, web browser, and search engine in the world. Running the project through the OpenJS Foundation is a more welcome approach. The new governance structure consists of working groups, an advisory committee, and a technical steering committee of people inside and outside of Google. This should bring more voices to the table and structure AMP into a better process for future decisions. This move will allegedly de-couple Google AMP Cache, which hosts pages, from the AMP runtime, which is the JavaScript source that processes AMP components on a page.
However, this is all well after AMP has been integrated into major news sites, e-commerce, and even nonprofits. So this new model is not an even-ground, democratic approach. No matter the intentions, good or bad, those who work with powerful entities need to check their power at the door if they want a more equitable and usable web. Not acknowledging the power one wields, only enforces a false sense of democracy that didn’t exist.
Furthermore, the web standards process itself is far from perfect. Standards organizations are heavily dominated by members of corporate companies, and the connections one may have to them offer immense social capital. Less-represented people don’t have the social capital to join or be a member. It is a long way until a more equitable process exists for these organizations, given the lack of diversity these groups tend to have, the costs of membership, and the time commitments involved. These particular issues are not Google’s fault, but Google has an immense amount of power when it comes to joining these groups. When joining standards organizations, it’s not a matter of earning their way up, but of deciding if they should loosen their reins.
At this point in time with the AMP project, Google can’t retroactively release the control it had in AMP’s adoption. And we can’t go back to a pre-AMP web to start over. The discussions about whether the AMP project should be removed, or discouraged in favor of a different framework, have long passed. Whether or not users can opt out of AMP has been decided in many corners of the web. All we can do now is learn from the process, and try to make sure AMP is developed in the best interests of users and publishers going forward. However, the open web shouldn’t be weathered by multiple lessons learned on power and control from big tech companies that obtusely need to re-learn accountability with each new endeavor.
from Deeplinks https://ift.tt/31FKvF6
0 notes
gbhbl · 6 years ago
Text
Machine Head are out on tour celebrating the 25th anniversary of their debut album, Burn My Eyes with returning members, Logan Mader and Chris Kontos.
What better place for a sold out celebration of Machine Head and Burn My Eyes than Brixton Academy! The UK has long held Machine Head in high esteem, staying strong and selling out shows even through the band’s tougher times. No matter what, the UK had faith to the point where frontman Robb Flynn has previously stated there were times when the UK felt more like home than home.
These days you are more likely to catch Machine Head, and other bands, in the Camden/Kentish Town area of London. Recent two day stints at The Roundhouse show the scale of Machine Head but go back 10 or 15 years and Brixton Academy was the main venue for metal, including Machine Head.
Even though Burn My Eyes would have been toured at the old Astoria in London, Brixton has always felt like Machine Head’s home. London has always embraced them and to give you a sense of that, this show sold out in less than 8 hours.
So the format for tonight is more of a celebration of Machine Head past and present. The recent format of “An Evening With” remains but with the show split into two halves. The first half is the new era, with Machine Head’s new line-up including Wacław Kiełtyka (Vog) and Matt Alston hitting us with 90 minutes or so of a greatest hits set list. Once they finish we get a short changeover followed by the return of (most of) the original Burn My Eyes line-up. Logan Mader is on guitars and Chris Kontos is on the drums, joining Robb and Jared. Robb and Adam Duce mustn’t have made amends enough yet for the latter to be part of the show.
So one show, but being delivered in two segments. Should be good right? The first half saw the band take to the stage to deafening roars of approval as the deep tones of Imperium blare out. The heavy start continues straight into Take My Scars leaving a writhing mass of bodies in the pit well and truly warmed up. Now We Die gets one of the best sing alongs of the night before the pace ramps up again with Struck a Nerve followed by the masterful Locust. Machine Head, with the new members sound great. Really strong and heavy. It could be my imagination but they do sound a little harder, a little faster and reenergised. I’m not sure if that is true or just my optimism though.
Robb chats to the crowd throughout declaring his love for the fans and reminiscing about the original Burn My Eyes tour. Say what you want about Robb, he is a wonderful frontman and has the rampant crowd eating out of the palm of his hand. I Am Hell and Aesthetics of Hate get aired next and both go down a treat. The band leave the stage for a few minutes now leaving Vog alone for a bit of a guitar solo. It’s pretty cool and he looks to be having fun as he plays around chucking little bits of Pantera’s Floods into the mix. It’s brilliant to see how happy the crowd are to have Vog and Alston in the band. To say they are welcomed by the Machine Head family is a huge understatement.
The gig continues with the crowd favourite Darkness Within which comes preceded by a lengthy but poignant speech from Robb on the importance of music in his life. The song is brilliant but also hits the first little disappointment of the night. Crowds love to extend the ending for as long as possible, something usually encouraged by Robb but here coming essentially a third of the way through the gig, Robb cuts it pretty quick as they need to get on.
Never mind though – music takes over again as Catharsis hits hard, sounding much edgier than the album cut. An old favourite comes next with From This Day. We may all dislike Robb rapping but it doesn’t seem like it when this song goes off and the crowd explodes singing “Time, To see”. A frenzied crowd get torn a new one next as Ten Ton Hammer pulverises us before a huge sing along comes for the first cover of the night with Iron Maiden’s Hallowed be Thy Name. The final song of the first act is, of course, Halo. It really gives Robb and Vog a chance to show their skills with the huge dual guitar solo as they stand back to back and play in perfect unity.
So ends Act 1 of this set for a 10 minute changeover before the Burn My Eyes section. It is strange though – ending with Halo, stopping for an intermission. Just feels a little odd. Still, the return of Robb and Jared and the arrival of Chris Kontos and Logan Mader raises the roof with the level of roaring from the crowd. Real Eyes, Realize, Real Lies plays out on tape before they arrive which is a bit disappointing but understandable. I was looking forward to hearing the guitar bits played out though understood there would be a tape for the samples. Otherwise there are no surprises from the set list.
It is Burn My Eyes, played in order. Davidian leads into Old which leads into A Thousand Lies. All songs I have heard before so while it is cool that Chris and Logan are there, it doesn’t really do much for me. The next chapter of the gig is the best of the night for me personally. None But My Own is a real highlight, not being a regularly played track at all, and sounds phenomenal live. Chris is one hell of a drummer and that guy oozes enthusiasm. This shows even more as he gets a few minutes for a drum solo next before the track I was most waiting for gets blazed out with The Rage to Overcome. It is the song of the night for me. Another rarely played track but man is it good. The drums are brilliant and Robb sounds enraged throughout. It is perfect.
Unfortunately as a show, things went a bit wrong after this point. Nothing on the band, purely on us as a crowd and this inability to buy a pint and drink it, rather than launch it at the crowd. Seriously. You spend a good £10 on a 2 pinter and launch it at your fellow fans. You suck!
As bad as it is throwing it, it appears some people’s aims aren’t great either, as this time they managed to land their drink on the soundboard. Halfway through Death Church the sound cut out and never really recovered. Well done to all the techs for their hard work in getting us back up and running, and well done to the venue for extending the curfew by half an hour to allow the gig to finish, but with all sound routed to the onstage monitors, it had a lesser impact and didn’t really fill the venue afterwards.
The 30 minute gap sucked a bit of the life out of the gig, being another interruption, and while I was proud of the fans who stayed and cheered the band through the mishap, and pleased with the band and staff who battled on, we need to look at ourselves really and think about what we are doing. I know that seems harsh. After all, it is just one drink from one person that landed on the soundboard, but I am seeing, and feeling, drinks being thrown all night by loads of people. It just happens that only one of them hit the soundboard, not that only one was thrown. This is also the second successive gig at Brixton where a drink has landed on the technical equipment and caused a problem. The last being a few months back at Gojira where their lights were taken out.
We need to do better and consider the consequence of our little rush of blood on those around us.
Eventually we get back under way with a missed Death Church and a shortened A Nation on Fire. Blood for Blood hits us hard next before I’m Your God Now lights the place up in pyro. A cover medley comes next with Metallica’s Battery, Rage Against the Machine’s Bulls on Parade and Slayer’s South of Heaven and Raining Blood getting an airing before the huge ending, Block. Block is a real treat, though a strange ending song for a Machine Head gig. The sweaty masses that have stayed scream “Fuck It All” impressively before all band members, past and present, come out to thank the crowd warmly to rapturous applause.
So, let’s be clear before I start listing things I didn’t like. I thought both versions of Machine Head were immaculate. They played feverishly and with real power. They were fantastic. My issues come from a more personal perspective but a gig is the whole experience and not just the quality of the band playing.
So, what exactly is my problem? Weirdly, I didn’t like the format at all. I thought I would but I didn’t. It was backwards as far as I was concerned. I expected Burn My Eyes to be the first half with the new line-up being the closing part. Nod to the past then a look towards the future. This also would have meant we would have started with Davidian and probably ended with Halo. Instead it felt a bit like, here is a quick glimpse of the future, now, forget that and let’s get back to the past. I personally felt like I would have enjoyed it better the other way round leaving the venue with the sounds of Halo or Darkness Within ringing in my ears.
Again, not entirely the band’s fault, but the show didn’t really flow well at all and instead of feeling like an evening with Machine Head, it felt like sections patched together. Those sections were separated by the covers thrown in, the official changeover and the loss of sound. I would always prefer another track or two from Machine Head over a load of covers anyway, so maybe I mentally checked out through these parts? Either way it felt a bit stop-start throughout. With the gig running over by 30 minutes it also meant exiting the building and getting home became a lot more rushed and difficult as fans desperately scrambled for the last trains home.
I fully commend the dedicated staff for staying on so Machine Head could finish the important Burn My Eyes part of the gig but why they still played their 6-8 minute long cover medley in the middle of it is beyond me.
So to summarise, I had a great time. I loved the new look Machine Head. I loved seeing the older version. Hearing some old, rarely heard tracks was a dream for me. Robb was on fire, Jared was his usual enthusiastic and solid self. As a band (or bands) they were near faultless but I didn’t love the whole event. The format was weird and backwards to me. An idiot made the stop start nature seem worse than it probably was. Songs felt out of place and I would have happily had a few less covers and a few more Machine Head songs. I think part of the problem is I have seen these guys so many times, and I am a huge fan. That gives me a solid idea of what to expect and also very high expectations.
This was a great Machine Head show where the band were phenomenal but the overall event was less so. Machine Head continue to march forward and, despite the haters, show they are still one of the best live bands out there but, while I completely get the need and want for this current anniversary format, the quicker it is done and we get back to normal, the better.
Live Review – Machine Head at O2 Academy Brixton (02/11/2019)
0 notes
torentialtribute · 6 years ago
Text
Get Callum Hudson-Odoi to sign a new deal: Frank Lampard’s in-tray if he takes charge of Chelsea
Get Callum Hudson-Odoi to sign a new deal and decide the futures of Pedro and Willian: Frank Lampard's in-tray if he takes charge of Chelsea
Lampard must convince hugely talented winger Callum Hudson-Odoi to stay
Published: 10:30 BST, June 15, 2019. Frank Lampard is expected to be named Chelsea manager in an appointment that would certainly please the club's supporters.
If he does indeed return to Stamford Bridge, then Lampard will have a lot of things to deal with immediately.
Sportsmail outlines what will be in Lampard’s in-tray if he is named Chelsea boss.
Frank Lampard has a large number of problems to tackle if he is appointed as Chelsea manager
1) Get Callum Hudson-Odoi to sign a new deal. The winger's contract expires next year, but Lampard's assistant Jody Morris knows him well because he led him in the Under-18 team, and they will want to tie him down.
Lampard will want to keep Callum Hudson-Odoi, who has one year left on his contract
2) Dealing with the transfer ban
They already signed Christian Pulisic last January, and young players returning from loans, such as Tammy Abraham, Mason Mount, Reece James and Fikayo Tomori, can fill the gap.
Chelsea, who signed Christian Pulisic in January, have effectively accepted FIFA's transfer ban
Which youngsters can play? And what to do with Victor Moses, Michy Batshuayi and Tiemoue Bakayoko, who were frozen out by Maurizio Sarri?
Lampard must decide what to do with Chelsea's returning loan players, such as Tiemoue Bakayoko
The contracts of 31-year-old Pedro and 30-year-old Willian both end in 2020 but David Luiz recently got a two-year deal, despite being 32. So they will probably want the same thing.
Experienced wingers Pedro and Willian both have a year left on their current deals
Lampard will also need to settle his backroom staff: Jody Morris will probably come in, and Didier Drogba could play a part. In an ideal world, Chelsea would like England assistant Steve Holland, but he would certainly expect to be number one when he takes his next step.
Jody Morris (left), another former Chelsea player, is set to be part of Lampard's coaching staff
0 notes
simon-frey-eu · 6 years ago
Text
How switching my parents over to Linux saved me a lot of headache and support calls
While staying at my parents' over the holidays (Christmas 2017), I had the usual IT-support stuff to do that always lands on tech-savvy kids when they are back home.
As I have been a happy Linux user for over a decade now, I asked myself whether it would be a good idea to switch my parents away from Windows 10 to a GNU/Linux-based system (I will just call it Linux for the rest of the post. Sorry, Richard ;) ).
I did that, and now, two years later, I still think it was a good idea: I have the peace of mind that their data is kinda safe, and they also call me less often about technical issues with the system. (Yes, Windows 10 confused them more than Ubuntu does.)
In the following I would like to describe this ongoing journey and how you can follow my example.
The post is structured as follows:
Preparation
Switching over
Ongoing improvements
Conclusion
Please keep in mind that this setup is my very own solution, and it is likely that you will need to tweak it to your needs. Disclaimer: I do not care about being "FOSS only" or anything like that.
Preparation
Background on my parents' computer usage: they mainly use their machine for email and web stuff (shopping, social media, online banking, ...) and are not heavily into hardware-intensive gaming or the like.
As my parents already used a lot of Free Software as their daily drivers (Thunderbird, Firefox), I did not have to do a big preparation phase. Still, I switched them (while still on Windows 10) to LibreOffice so that they could get used to it before changing the whole system.
That is my first big advice for your:
Try not to overwhelm them with too many new interfaces at once. Use a step-by-step approach.
So first of all, keep them on their current system and help them adapt to the FLOSS software that will be their daily driver on Linux later on.
So two steps for preparation here:
1) Sit down with your folks and talk through their daily usage of their computer (please don't be so arrogant as to think you already know it all)
2) Try to find software replacements for their daily drivers, that will work flawlessly later on the Linux machine. The ones I would recommend are:
Firefox as browser (and maybe for email, if they prefer webmail)
Thunderbird for Emails
GIMP for Image Editing
VLC as Media Player
LibreOffice instead of MS Office
Now that you have found and set up replacements for the proprietary Windows software, you should give them time to adapt. I think a month is suitable. (FYI: I got the most questions during this phase; the later switch was less problematic.)
Switching over
Your parents have now gotten used to the new software, and that will make it easier for them to adapt to the new system, as they now only have to get used to the new OS interface and not to a lot of new software interfaces on top of it.
Do yourself a favor and use standard Ubuntu
I know there are a ton of awesome Linux distros out there (btw, I use Arch ;)) but my experience during this journey brought me to the conclusion that standard Ubuntu is still the best. That is mainly because all the drivers work mostly out of the box and the distro does a lot automatically. (Because of that, my parents were able to install a new wireless printer without even calling me... beat that, Gentoo ;))
On top of that, the Ubuntu community is multilingual and open to newbies.
The journey until Ubuntu
Before Ubuntu we tried several other distros, all of which fell short at some point. (Please bear in mind that these are all awesome projects, and for myself they would work 100%, but for non-technical people like my parents a distro just needs to be really solid.)
1) ChaletOS, as it was promoted as the most Windows-lookalike. As it is based on XFCE it is lightweight, but the icons and styles differ all over the UI, so you get confused because the settings icon always looks different depending on where in the system you are.
2) Elementary OS, because I love the UI myself. No clue why, but my parents never warmed to it. It is just a bit too far away from what they are used to.
3) Solus OS has a more Windows-looking UI and it worked better for my parents. But in the end you have to say Solus is just not there yet. The package manager has too few packages, and whenever you have a problem it is super hard to find a solution on the net. Plus: the UI crashed at least once a day. (IMO a driver problem with the machine, but even after hours of work we did not find a solution.)
4) Finally, Ubuntu (https://www.ubuntu.com/), and that now works nice and smooth (for over 8 months now).
Nuke and pave
So you selected the distro and are now able to nuke and pave the machine. I think I do not have to explain in-depth how to do that, just two important things:
Back up all your parents' data to an external hard drive (copy the complete C: drive); a minimal copy sketch follows this list
Write down upfront what software you want to install, and make sure you also back up the configuration and data of those programs
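For the backup step, a minimal sketch in Python could look like this, assuming you boot a live Linux USB and that the old Windows partition and the external drive are mounted at the hypothetical paths /mnt/windows and /media/backup (adjust both to your setup):

import datetime
import pathlib
import shutil

src = pathlib.Path("/mnt/windows")  # assumed mount point of the old C: drive
dst = pathlib.Path("/media/backup") / f"win-backup-{datetime.date.today()}"  # assumed external drive

# Copy everything except Windows system files that are useless in a backup.
shutil.copytree(
    src, dst,
    ignore=shutil.ignore_patterns("pagefile.sys", "hiberfil.sys"),
    dirs_exist_ok=True,
)
print(f"Copied {src} to {dst}")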
Cheating: If you want to amaze them with the new system even more and the machine is still on an HDD, replace it with an SSD, so the Linux system feels even better and faster ;)
Configuration
After you have installed the distro, do a complete configuration. (Yes, go through every setting and tweak it if needed.)
Now install the software your folks already used on their Windows machine and make sure it is configured in the exact same way as it was on the old system! (That will help a lot in keeping morale up, because then there is already something that feels familiar to them.)
I found that it is best to place the shortcuts of the applications your parents use the most in the bar on the left side of Ubuntu, so they find them easily.
Sit down with your parents and ask them what data they need from the old system, and copy only that over. This way you clean up the file system by not copying over the old crap they have not used for ages, and if they find out later that there is more data they need, it is still stored on the backup drive.
Introduce them to the new system
Now that the configuration and setup are complete, you need to allocate some time for introducing them to the new system. You know your parents best, so do it in the way they like.
For me the following routine worked best:
0) Explain it to them in two individual sessions (as usually one of them is more tech savvy than the other, and this way both have the chance to ask you questions individually)
1) Shutdown the machine
2) Let him/her start the machine
3) Tell her/him to try to do their daily business, and whenever questions come up explain how to solve the issue (Never touch the mouse or keyboard! If you take it over, it is very likely that you will be too fast)
4) Stop after 60 minutes and if there are still questions do another session the next day (Imagine yourself learning something completely new to you - maybe Chinese - are you able to concentrate more than an hour?)
Some topics I would recommend you to cover during the introduction:
How to setup a new wifi connection (especially if the machine is a laptop)
How to install new software
How to setup a new printer/scanner
How to print/scan
How to restore deleted files
How to get data from/to a USB-stick or mobile device
How to shutdown the machine (not that easy to find on Ubuntu)
Ongoing improvements
So normally now the system should work as intended and if you are lucky it saves you a lot of problems in the future. In this section I will give you some more recommendations, that helped to make the experience even better:
Linux always asks you for your password if you are doing something that could deeply harm the system. So I told my parents: whenever that dialog (I showed it to them) pops up, they should keep in mind that they could destroy the whole machine with this operation, and if they want they can call me first.
Show them the app store and tell them that whatever they install from there is safe (so no viruses or anything) and that they can install everything they want as long as it is from there. It is fun to find new cool software and games, so help them experience that fun too :D
Backups! As it is really easy with Linux, you should do an automatic daily/hourly backup of their complete home folder. I use borg for that. (I plan to write an in-depth blog post about borg in the future; it will be linked here when it is done.) So now, whenever my parents call me and tell me that they deleted something or that the machine does not boot anymore, I can relax and tell them that we can restore all their data in a matter of minutes... you can't imagine how good that makes me feel.
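For reference, a daily borg run could look like the minimal sketch below. The repository location, the home directory and the exclude pattern are assumptions; the repository has to be created once beforehand (for example with "borg init --encryption=repokey /backup/parents-repo").

import subprocess

REPO = "/backup/parents-repo"  # assumed path to the borg repository

# Create a new archive of the home folder; borg expands the {now} placeholder itself.
subprocess.run(
    ["borg", "create", "--stats",
     "--exclude", "/home/parents/.cache",
     f"{REPO}::home-{{now}}",
     "/home/parents"],
    check=True,
)

# Keep a sensible number of old archives around.
subprocess.run(
    ["borg", "prune", "--keep-daily", "7", "--keep-weekly", "4", REPO],
    check=True,
)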
It is not FOSS, but I did install Google Chrome, as it was the easiest option for watching Netflix and listening to Spotify.
I would recommend installing some privacy plugins and the like into the browser your parents use, so you get them even safer.
If you have some software that does not have a good replacement, try to use Wine for it. That worked well with MS Office 2007. (Sorry LibreOffice, but you still can't compete with MS here.) PlayOnLinux helped me a lot with the Wine setup.
If possible, activate automatic installation of all security updates (on Ubuntu, the unattended-upgrades package takes care of this).
Conclusion
For me the switch made a lot of sense, as my parents are not heavy technical users of their system. Should yours be into Photoshop, video editing or gaming, I do not think the switch will be so easy, as Linux and its software are still not good competitors in these areas.
I would love to get your feedback on this blog post: Did you switch your parents to Linux and how did that work out? Do you have other insights that should be added to this post? Hit me up via [email protected]
Thanks for reading! Simon Frey
P.S. One reason why my parents' machine did not boot several times was a plugged-in USB stick that the BIOS tried to boot from. So do not forget to set the boot order to boot from the hard drive first ;)
Did you like this post?
Feedback: Email
RSS Feed - This work is licensed under Creative Commons Attribution 4.0 International License
Old Tux Image Source (CC BY-SA 3.0): https://easylinuxtipsproject.blogspot.com/p/mint-xfce-old.html
0 notes
wcuijobs · 7 years ago
Text
MRI Coordinator - AZ
Medical imaging plays a pivotal role in the delivery of excellent patient care at Banner Health. From detection and diagnosis to the treatment of illnesses and abnormalities, Banner Health’s varied medical imaging and radiology services help physicians establish and execute individualized treatment plans.
With the cutting-edge technologies used in MRI at Banner University Medical Center – Phoenix, we are able to provide multiple advanced services to our adult and neonatal patient population. We provide services to outpatients, inpatients, the emergency room, and intraoperatively.
At our facility you will have the opportunity to work on new state-of-the-art Siemens Aera 1.5T and Vida 3T scanners. Assesses and educates patients according to specific department protocols. Assures the safety of the MRI environment. Protects patients and staff from potential harm by ensuring all individuals entering the area of magnetic field influence are screened and determining that only non-ferrous objects are taken into the MRI scan room. Maintains mandated continuing education requirements. May be required to train/precept students and new techs. Positions patients and selects anatomic and technical parameters accurately. Prepares and administers contrast media as prescribed and within the accepted scope of practice. Prepares for and assists the physician in magnetic resonance imaging diagnostics and procedures.
MINIMUM EXPERIENCE: Prior general radiology experience or prior MRI hospital experience. REQUIRED CERTIFICATIONS/LICENSURE: Current certification for CPR and ARRT-MRI or ARMRIT.
As a MRI Coordinator at this facility, you will have the opportunity to join a fast paced, highly efficient team who cares for some of the most critical patients in the state. If being an integral part of a Level 1 Trauma Center, Transplant Center, and the first Comprehensive Stroke Center in the State of Arizona interests you, then this is the place for you!
Your pay and benefits (Total Rewards) are important components of your Journey at Banner Health. Banner Health offers a variety of benefit plans to help you and your family. We provide health and financial security options, so you can focus on being the best at what you do and enjoying your life.
-
About Banner - University Medical Center Phoenix
Banner - University Medical Center Phoenix is a nationally recognized academic medical center. The world-class hospital is focused on coordinated clinical care, expanded research activities and nurturing future generations of highly trained medical professionals. Our commitment to nursing excellence has enabled us to achieve Magnet™ recognition by the American Nurses Credentialing Center. The Phoenix campus, long known for excellent patient care, has over 730 licensed beds, a number of unique specialty units and is the new home for medical discoveries, thanks to our collaboration with the University of Arizona College of Medicine - Phoenix. Additionally, the campus responsibilities include fully integrated multi-specialty and sub-specialty clinics, and with a new $400 million campus investment, a new patient tower and 2 new clinic buildings will be built.
About Banner Health
Banner Health is one of the largest, nonprofit health care systems in the country and the leading nonprofit provider of hospital services in all the communities we serve. Throughout our network of hospitals, primary care health centers, research centers, labs, physician practices and more, our skilled and compassionate professionals use the latest technology to make health care easier, so life can be better. The many locations, career opportunities, and benefits offered at Banner Health help to make the Banner Journey unique and fulfilling for every employee.
-
Job Summary
This position facilitates services and provides clinical support within the department. Performs prescribed procedures as directed by following department/facility policies, procedures and protocols. Must demonstrate the knowledge and skills necessary to organize and provide care appropriate to patient population.
Essential Functions
Provides or facilitates patient care for patient populations and serves as a resource to staff for clinical support. Assumes responsibility for direct patient care when necessary. Promotes interdisciplinary patient care planning and patient education.
Demonstrates leadership qualities in support of department needs and collaborates with various departments, outside vendors, and other departments to assure adequate resources and the proper coordination of safe, efficient patient care management.
Serves as resource to patients, families, providers and staff in providing care by facilitating patient flow. Assists in the interpretation of department/facility/system policies within the clinical setting. Responsible for providing safe and cost effective care while considering patient satisfaction and customer service.
Participates in staff development, orientation, education and evaluation of clinical competencies. Mentors staff to increase clinical, critical thinking and problem solving skills. May participate in employee performance assessments.
Supports change and assists in the development, interpretation, implementation and evaluation of the process improvement and quality management activities of the department/system. May serve as QA and clinical educator. May coordinate and assist in development of QI/QC projects.
Monitors staff usage and ensures staffing meets patient needs in a fiscally responsible manner.
Accountable for the ethical, legal, and professional responsibilities related to imaging practice. This includes maintaining confidentiality of all work information. Adheres to safety policies.
Assures the efficient operation of workflow of the department. Performs prescribed procedures in accordance with established departmental/facility policies and procedures.
Minimum Qualifications
Certificate or diploma from an approved/accredited Radiologic Technology program or equivalent program for other modalities (MRI, Nuclear Medicine, CAT Scan, Mammography, Diagnostic imaging).
Requires national certification from the American Registry of Radiologic Technologists (ARRT) and/or modality qualified licensure (NMTCB, ARDMS, ARMRIT). Licensure by state regulatory agency required, if applicable. Advance certification by accrediting body in specialty required (MRI, Nuclear Medicine, CAT Scan, Mammography), if applicable. BLS certification required. Depending on certification and modality(ies) coordinating, may be assigned to a single modality or Multi-Modality Medical Imaging Coordinator role (Ultrasound Coordinator, MRI Coordinator, Mult Mod Med Img Coord, etc.)
This position requires clinical knowledge typically achieved with 3+ years of experience. Must demonstrate effective communication skills, human relations skills, analyze data and solve problems.
Preferred Qualifications
Health care related Bachelors degree and prior supervisory experience preferred.
Additional related education and/or experience preferred.
https://jobs-bannerhealth.icims.com/jobs/231483/mri-coordinator/job?mode=job&iisn=indeed3-organic&mobile=false&width=1874&height=500&bga=true&needsRedirect=false&jan1offset=-420&jun1offset=-420
0 notes
twosecondstreet · 7 years ago
Text
Marriott has a newish hotel brand called Moxy. It’s essentially a budget chain aimed at millennials, boasts fantastic internet connections, a bar with craft cocktails, and so many random decorations all over the place you’d think an Instagram feed exploded in the lobby. I’ve been lucky enough to visit the only two Japanese locations, in Honmachi in Osaka and in Kinshicho, Tokyo.
Moxy Osaka
I spent a good week here, seeing Osaka and Kyoto during my visit.
The location is decent: It’s in a business district but it’s very centrally located in Osaka. You can walk to a variety of great restaurants and craft beer bars, and the large Osaka station is only a 5-minute train ride from the local station.
The lobby is incredible: The second floor is open and it makes the space feel massive. You can see all sorts of kooky decorations and areas that were meant to be photographed and shared on Instagram or Snapchat with your friends. It has this quirky personality going for it and that carries into the rooms. You see framed photos of local destinations in black and white, hooks galore that hang an extra chair and table, bright pink soap dispensers and blow drier, subway tiling in the bathroom, and a rollicking video of hotel staff showing off the hotel on the TV.
Typical room door.
The beds are cozy enough, but like most hotels in Japan, the comforter is super heavy, so if it’s a hotter time of year, it can be a bit frustrating to deal with. A double-edged sword of an addition is the motion-sensitive light at the foot of the bed. If you get up in the middle of the night, a light under the foot of the bed turns on and helps guide your way. It’s a cool idea, but sometimes, I found myself blinded as my wife got up in the middle of the night to use the restroom. Even the dimmest of lights can seem overwhelming when it’s pitch-black.
The bar had a nice mix of drinks for you: shochu, beer, wine, and cocktails. If you’re a Silver member in the Marriott Rewards system, you get a free drink for each guest, which is super nice. They also have a super good deal for happy hour: two-for-one drinks, which I have not heard of at any bar here in Japan. We each tried a few, and for the most part, they pack a nice punch. Be wary of the Silent Killer. If lime, Pepsi, and Kahlua sound like an unpleasant combination, that’s because they are. I had to sit through two of them as well, much to my displeasure. All the other drinks are great, though! Sit at the 1.5-floor counter and watch the intrigued citizens of Osaka walk by and peek at the menu posted outside. Even if you don’t stay here, I recommend heading over for the happy hour!
The internet here was just as amazing as they said it would be. I never had a connection problem and it seemed like I got full strength no matter where I was. We didn’t try any of the hotel’s food while there as we opted for super cheap eats most of our meals there. They do have a ramen bar where you can get your noodles and pick your toppings. The lunch specials are also a great bargain: from 600-900 yen for a set. Their signature flatbreads looked delicious as well!
Overall, I really enjoyed staying here. It was quiet, comfortable, and a good price (if you book in advance).
Moxy Tokyo
So, I only got to stay here one night, full disclosure. It wasn’t a full week of exploration and investigation like I was able to do at the Osaka branch. This will change my experience and my perceptions of the hotel. I’m only human.
A few things you should know immediately.
Kinshicho, the neighborhood hosting Moxy Tokyo, has a bad reputation among the Japanese. Why? Foreigners. The area has a very noticeable number of European hostess clubs, as well as businesses with aggressive barkers who are distinctly not Japanese in appearance or demeanor. There seems to be a lot of sex-based business going on here, which doesn’t project the most positive of images in most cultures. The Moxy is right across the street from two such clubs and just up the street from a very flamboyant love hotel (Hotel Sara, for the curious). If this bothers you, you might want to stay elsewhere.
That being said, Kinshicho is a really fantastic location in Tokyo. You’re minutes away from Akihabara, Asakusa, and Tokyo Skytree. In addition, because of the strong foreign influence, you get a lot of good international food. The last remaining Romanian restaurant in the country is here, along with Indian and Russian cuisine. I can’t emphasize enough about the Romanian restaurant; the owner seems pessimistic about the future, but his food is amazing! Try and get it if you can while he’s open.
Smiling right now. Thanks for the reminder, pillow!
You have to step up in the Tokyo showers!
As for the hotel itself, it’s much smaller. So much smaller. The lobby is small, but they do a good job with the space to make it feel more open. The rooms themselves are also smaller but I chalk that up to the insane price per square meter Tokyo wrings out of its land. Can’t blame them too much for wanting to be economical. The window in our room was better here, but everything else was pretty much the same. I had a lot of problems connecting to the internet at this location as well. So much for that blazing-fast internet.
The bar was unchanged. Still didn’t try any of the food, sadly.
They also employed more foreign staff at this location, which was nice to see. The English ability was much better overall, if that’s something you need. The decor was a bit wackier at this location. I got a chuckle from the lobby bathroom signs, but the stuffed animal wall and a few other details didn’t do it for me. I did really like the old CRT TVs with the classic Famicoms set up on them. Does that make this lobby bar technically a video game bar? They did have a reasonable selection of classic games to play.
The hotel did have a nice feel, but I think I had such a nice and relaxing time at the Osaka branch that the experiences don’t line up. The Tokyo branch is fine, but I feel the objectively nicer of the two is in Osaka. It’s bigger, the rooms are more spacious, the lobby is better decorated, and the location feels a bit nicer.
And there you have it! Feel like visiting a Moxy? Have you been to one before? Let me know which one and what you thought of it in the comments below. I’d love to hear your impressions of the new chain.
Moxy Hotels: Osaka and Tokyo
0 notes