huhnn-blog
alkesnes
128 posts
huhnn-blog · 8 years ago
Text
Make it wholesome
Give preference to whole-grain products, because they are rich in fiber. Swap white-flour products such as white rolls, toast, white pasta, polished rice, and mixed or white bread for whole-grain products and muesli.
But be careful: not everyone tolerates whole grains combined with sugar, and painful bloating is often the result.
As natural as possible
This means eating foods in their natural form wherever possible, or processing them as little as possible, because that ensures the natural components (e.g. fiber, vitamins) are largely preserved.
Example: boiled potatoes in their skins instead of potato pancakes; eat raw vegetables or fresh fruit instead of drinking juice.
Low in fat, but not fat-free
Many people cut fat in the wrong places and leave the valuable oil off their salad, yet pay far less attention to sausage, chips, and above all cheese.
The right approach: use olive oil and sunflower oil for salads and vegetables, and swap fatty animal foods (sausage, cheese, etc.) for low-fat alternatives. The gut cannot work entirely without fat; the motto, however, is fat in moderation, and the right (plant-based) fat!
More plant-based food
The basis of our diet should be plant-based foods such as grain products, potatoes, legumes, fruit, vegetables, and salad. These are the foods to fill up on; then we take in not only the main nutrients in a balanced ratio but also enough fiber.
Meat, sausage, and the like do contain valuable animal protein, but usually also a lot of fat, and no fiber at all. Anyone who wants to eat healthily, and above all anyone who suffers from constipation, should eat these foods only occasionally and in small amounts.
Plenty of fluids
Insufficient fluid intake is often the trigger for constipation. The gut in particular needs enough fluid for digestion and elimination. Aim for 2 liters of fluid a day, such as mineral water, herbal and fruit teas, or diluted juice spritzers.
Please note: caffeinated drinks such as coffee or black tea, and alcohol (including beer!), cannot cover the body's fluid needs because of their diuretic properties.
Herbs, spices, and more
Go easy on salt. Use plenty of fresh herbs instead, as they contain compounds that promote digestion. Spices such as coriander, anise, fennel, and caraway also stimulate digestion.
Lacto-fermented vegetables
Our grandmothers already knew that lacto-fermented vegetables such as sauerkraut, beans, and onions are very valuable foods. They should not be heated, however, so that the lactic acid bacteria, which are so valuable for the gut flora and digestion, do not die off.
Soured milk products
Whey, kefir, buttermilk, and plain yogurt are easily digestible foods that support digestion.
Eating culture
Not only what and how much we eat affects our digestion, but also how we eat.
Alongside a wholesome, healthy diet, the following recommendations help get digestion back on track:
Remember: digestion begins in the mouth! Chew every bite thoroughly.
Concentrate on the meal; don't let reading or watching TV distract you.
Take enough time to eat, and don't eat in a hurry. If there is no time for a meal, it is better to eat later.
Allow yourself breaks to rest.
Arrange the food appetizingly and set the table with care, because we eat with our eyes too.
Five small meals are usually better tolerated than three large ones.
1 note · View note
huhnn-blog · 8 years ago
Photo
Zack Merrick of All Time Low on Flickr
754 notes · View notes
huhnn-blog · 8 years ago
Photo
Mario koala sculpture created to raise awareness for the Currumbin Wildlife Hospital Foundation in Australia.
104K notes · View notes
huhnn-blog · 9 years ago
Text
Intel® RealSense™ Technology: Bringing Human-like Senses to the Devices We Use Today and Tomorrow
By Sanjay Vora on January 6, 2016
Intel® RealSense™ Camera R200 debuts
The Intel RealSense Camera R200, which debuts on 6th Generation Intel® Core™ platforms, is a camera module built into the back of 2 in 1 detachable PCs and tablets, allowing people to experience:
3D scanning: Scan people and objects in 3D to share on social media or print on a 3D printer.
Immersive Gaming: Scan oneself into a game and be the character in top rated games like Grand Theft Auto*, Skyrim*, Fallout 4* and ARMA3*.
Enhanced Photography/Video: Create live video with depth enabled special effects, remove/change backgrounds or enhance the focus and color of photographs on the fly.
Immersive Shopping: Capture body shape and measurements as depth data that is transformed into a digital model, enabling people to virtually try on clothes.
How does it do all of this? The Intel RealSense Camera R200 is capable of capturing VGA-resolution depth information at 60 frames per second. The camera uses dual infrared imagers to calculate depth using stereoscopic techniques, similar to how human eyes perceive depth. By leveraging infrared technology, the camera provides reliable depth information even in darker areas and shadows, as well as when capturing flat or texture-less surfaces. The operating range for the Intel RealSense Camera R200 is 0.5 to 3.5 meters indoors. The RGB sensor captures 1080p resolution at 30 frames per second.
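To make the stereoscopic idea concrete, here is a minimal sketch of how depth falls out of the disparity between two imagers; the focal length and baseline values are illustrative assumptions, not the R200's actual calibration.

```python
# Minimal sketch of stereo depth estimation, the principle behind dual-infrared
# depth cameras. Focal length and baseline are illustrative values only.
import numpy as np

FOCAL_LENGTH_PX = 600.0   # assumed focal length in pixels
BASELINE_M = 0.07         # assumed distance between the two IR imagers, in meters

def depth_from_disparity(disparity_px: np.ndarray) -> np.ndarray:
    """Convert a per-pixel disparity map (in pixels) to depth in meters."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(disparity_px, np.inf)   # zero disparity means "too far to tell"
    valid = disparity_px > 0
    depth[valid] = FOCAL_LENGTH_PX * BASELINE_M / disparity_px[valid]
    return depth

# A feature seen 30 px apart in the two IR images is roughly 1.4 m away.
print(depth_from_disparity(np.array([30.0])))
```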
The benefits and capabilities of the Intel RealSense Cameras are generally experienced through applications that leverage the depth imagery for a variety of usages. To help accelerate and support app development, the Intel RealSense Software Developer Kit (RSDK) makes APIs available for a variety of key end-user applications, and we are working with a significant number of ecosystem players to deliver experiences like 3D scanning, depth-enabled video and photography, immersive shopping, and gaming.
An example of the type of new usage that will be possible is an immersive shopping experience from Intel and Zappos that was unveiled for the first time at CES 2016. Initially a beta release, it will see Intel and Zappos host pop-up shops and events where invited beta testers can capture their body shape and measurements using a device with a world-facing Intel RealSense Camera R200; the capture is transformed into an Intel RealSense Model. For a limited set of denim products, beta testers can virtually try on multiple sizes and styles on their model and understand how they would fit.
It is still early days for this technology and these experiences, but we’re excited about what’s possible.
A number of OEM systems featuring the Intel RealSense Camera R200 are currently available, or will be soon, including the HP* Spectre x2, Lenovo* Ideapad Miix 700, Acer* Aspire Switch 12 S, NEC* LaVie Hybrid Zero11, and a system from Panasonic*. The Intel RealSense Camera R200 is supported on all Windows 10 systems running on 6th Generation Intel® Core™ processors.
Intel® RealSense™ Camera ZR300 for developers
Joining the family of cameras is the Intel® RealSense™ Camera ZR300, being featured as an integrated unit within the newly announced Intel® RealSense™ Smartphone Developer Kit (coming soon). A cutting-edge vision and sensing platform, this developer kit and accompanying device runs on the Intel® Atom™ x7-Z8700 mobile SoC (System on a Chip), and will be offered to Android developers to help bring about the next generation of human-like senses to smartphone and phablet usages.
At the heart of the smartphone development platform is the new Intel RealSense Camera ZR300. This array of six sensors includes the Intel RealSense Camera R200, a high-precision accelerometer and gyroscope combo, a wide field-of-view camera for motion and feature tracking, an 8MP rear RGB camera, and a 2MP front-facing RGB camera. The Intel RealSense Camera ZR300 provides high-quality, high-density depth data at VGA resolution and 60 frames per second.
The Intel RealSense Camera ZR300 supports Google Project Tango specifications for robust feature tracking and synchronization via time stamping between sensors. Combined with its low power consumption, the camera can support versatile experiences such as indoor mapping and navigation, as well as area-learning capabilities that naturally complement 3D scanning, immersive virtual and augmented reality, and drone and robotics applications.
The Intel RealSense Smartphone Developer Kit runs on Android OS and implements Google's Project Tango Product Development Kit (PDK) and the Intel RealSense SDK add-on for Android. Additional technologies include Bluetooth 4.0, NFC, GPS, 802.11 Wi-Fi, and 3G connectivity. At the moment, the Intel RealSense Camera ZR300 will only be available as an integrated module inside the Intel RealSense Smartphone Developer Kit, whose pre-order site goes live at CES 2016.
Intel® RealSense™ Camera F200 Gains Momentum
Momentum continues to grow with Intel® Core™ based notebooks and all-in-one (AiO) systems that leverage the Intel® RealSense™ Camera F200 for a variety of short-range, user-facing usages and applications.
The Intel RealSense Camera F200 is capable of detecting people and objects in the foreground and separating them from the background. This background segmentation can enable more immersive collaboration by letting people selectively remove and replace their background, or create and share content, while video chatting on services like ooVoo and Tencent QQ. But it's more than just collaboration. RealSense can enable a virtual green screen for gamers, allowing players to stream their live gameplay on popular broadcasting platforms such as Twitch, XSplit, and OBS. Smack talk can also go to another level when players and their opponents can see each other's reactions in a video chat projected in the game's border, with all other background removed, using an app like Personify. Of course, the Intel RealSense Camera F200 also enables gesture control, so games can be played in wholly different ways, as in the independent titles Laserlife and Nevermind, and with the Intel® Block Maker people can scan objects and then import those 3D objects into their Minecraft world.
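Conceptually, this kind of background segmentation comes down to keeping only the pixels whose depth falls inside a near range. A minimal sketch, assuming a depth map in meters aligned to the color frame; the 1.2 m cutoff is an arbitrary assumption, not an F200 parameter.

```python
import numpy as np

def remove_background(color: np.ndarray, depth_m: np.ndarray,
                      max_depth_m: float = 1.2) -> np.ndarray:
    """Zero out color pixels whose depth is missing or farther than the threshold."""
    mask = (depth_m > 0) & (depth_m <= max_depth_m)   # keep near, valid pixels
    segmented = color.copy()
    segmented[~mask] = 0                              # paint the background black
    return segmented

# Toy 2x2 frame: only the near pixel (0.8 m) survives segmentation.
color = np.full((2, 2, 3), 255, dtype=np.uint8)
depth = np.array([[0.8, 2.5], [0.0, 3.0]])
print(remove_background(color, depth))
```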
50 notes · View notes
huhnn-blog · 9 years ago
Text
Fujitsu & Intel Collaborate on a New Era of IoT Platform Solutions
By Doug Davis on May 28, 2015
Few things are as undeniable as the inherent forward momentum of human progress that is made possible by the Internet of Things (IoT). In Japan recently, we saw once again how the power of collaboration is leading to a new era of IoT that is being fueled by the extraordinary innovation of Intel IoT ecosystem collaborators like Fujitsu.
As announced during my keynote at Fujitsu Forum, Fujitsu and Intel are combining Fujitsu Laboratories’ robust distributed service platform technology with the Intel IoT Gateway to build comprehensive IoT solutions. By combining Fujitsu’s cutting-edge technology with the Intel IoT Gateway, an integral part of the end-to-end Intel IoT Platform, both companies are creating an optimal systems environment, thereby offering high-value IoT solutions.
Inside a Distributed IoT Service Platform
Fujitsu’s distributed service platform technology allocates service functions across an entire network, including both the network center and remote sites, enabling unified management. A cloud-based centralized-management mechanism optimally distributes data processing in response to monitored information received from each Intel gateway. This optimized processing, distributed through gateways, happens automatically and without human intervention, in response to service requirements.
To make the most of the distributed service platform using this technology, some processing needs to be handled at the gateways, rather than concentrating all of it at the center. The Intel IoT Gateway uses a tested combination of Intel processors and Wind River and McAfee software, with outstanding processing capacity, security, and certainty. Combined with Fujitsu’s distributed service platform technology, the result is a scalable system environment that can be quickly built and revised to respond to real-time changes in data volumes, thereby optimizing overall system costs.
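As a rough illustration of that division of labor, the sketch below decides whether a service function should run at a gateway or at the center based on monitored volume and link latency. The thresholds and names are hypothetical and not part of Fujitsu's or Intel's actual platform.

```python
# Hypothetical sketch of the distribution idea: push processing to the gateway
# when observed data volume or latency makes round trips to the center costly.
from dataclasses import dataclass

@dataclass
class GatewayStatus:
    gateway_id: str
    events_per_sec: float   # monitored data volume reported by the gateway
    link_latency_ms: float  # measured latency between gateway and center

def placement_for(status: GatewayStatus,
                  max_center_events: float = 500.0,
                  max_latency_ms: float = 100.0) -> str:
    """Decide where a service function should run for this gateway."""
    if status.events_per_sec > max_center_events or status.link_latency_ms > max_latency_ms:
        return "gateway"   # filter/aggregate locally, send summaries upstream
    return "center"        # low volume and a fast link: centralize processing

print(placement_for(GatewayStatus("gw-01", events_per_sec=1200.0, link_latency_ms=40.0)))
```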
Field Testing IoT at Shimane Fujitsu
As the first step in our collaboration, Intel and Fujitsu have initiated a field test at Shimane Fujitsu with the aim of expanding the scope of visibility into the manufacturing facility’s operations and reducing indirect overhead costs through data collection and analysis.
Shimane Fujitsu will collect post-shipment sensor data from products using FUJITSU IoT Solution UBIQUITOUSWARE, and conduct correlation analyses of the data with logs from production processes to further reduce costs. Initially, this will be used to bring a greater degree of visualization to the repair process of defective items undergoing reworking.
As part of this field study, the locations of products needing repair will be tracked, as well as their progress through the repair process, including wait times, all in real time; this should help reduce the number of extra steps required before shipment. In the future, this information can be correlated with video analyses of operators and equipment in trial production processes to further reduce the reject rates of finished goods and cut indirect costs. Our goal is to expand the scope of visualization to cover the entire supply chain between different plants.
The Future of the Global IoT
Fujitsu and Intel plan to take what we learn at Shimane Fujitsu and extend that to Fujitsu Group locations worldwide, and to roll out a series of IoT solutions for manufacturing. This is yet another superb example of what is possible through collaboration with our innovative ecosystem collaborators.
73 notes · View notes
huhnn-blog · 9 years ago
Text
Building the Next-Generation Car with Intel IoT
By Sam Lamagna on November 23, 2015
Consider for a moment what the creation of a truly connected and intelligent vehicle could mean. First and foremost, increased safety. Also, improved fuel efficiency. Less congestion. Decreased pollution.
Even better? A driving experience enhanced with Intel Internet of Things (IoT) that can help make your commute safer, more productive, and enjoyable.
With the rapid convergence of commute and compute, the automotive industry is undergoing a radical transition not seen since the creation of the assembly line. And that’s presenting automotive and IT professionals with a unique opportunity: to take a fundamentally different approach to vehicle function, design, and construction.
Today, our cars are transportation machines that contain computers. Tomorrow, they’ll be intelligent data devices that actively help us take advantage of every moment on the road. Even the materials we use to build them may change; we could have plastic vehicles, or even biodegradable ones.
Sound too sci-fi? It’s not. Look at what we’re already doing with advanced driver assistance systems (ADAS). Twenty years ago, adaptive cruise control, automatic parking and braking, blind spot detection, and collision avoidance—and self-driving cars—were just engineers’ dreams.
Navigating a connected environment with V2x
As automakers race to build the ultimate ADAS, two camps are emerging.
The first is focusing on vehicle message passing, in which the car communicates with:
Other vehicles (V2V)
Infrastructure points, like signs and traffic lights (V2I)
Bicyclists and pedestrians, via smart-phone apps and wearables (V2P)
The cloud (V2C)
Combined, we call this “V2x.”
The main challenge with V2x is what to do with the incoming data. Many proponents feel that the best approach is to compile and present it to the driver to interpret and act upon. But then there’s the danger of information overload. We know from managing cybersecurity breaches that it’s risky to treat all incoming threats equally. They need to be prioritized, so that people are alerted only to the most pressing concerns.
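One way to picture that prioritization is a simple severity ranking over incoming V2x messages, so only the most pressing ones ever reach the driver. The message types and severity scale below are assumptions for illustration, not part of any V2x standard.

```python
import heapq

# Illustrative severity scale (an assumption, not a V2x standard).
SEVERITY = {"collision_warning": 3, "emergency_braking": 3,
            "road_hazard": 2, "signal_phase": 1, "parking_info": 0}

def top_alerts(messages, max_alerts=1, min_severity=2):
    """Return at most max_alerts messages at or above min_severity, most severe first."""
    ranked = [(-SEVERITY.get(m["type"], 0), i, m) for i, m in enumerate(messages)]
    heapq.heapify(ranked)
    alerts = []
    while ranked and len(alerts) < max_alerts:
        neg_severity, _, msg = heapq.heappop(ranked)
        if -neg_severity >= min_severity:
            alerts.append(msg)
    return alerts

incoming = [{"type": "parking_info"}, {"type": "collision_warning"}, {"type": "road_hazard"}]
print(top_alerts(incoming))   # only the collision warning reaches the driver
```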
Plus, much industry work is required to ensure that V2x communication protocols are standardized. To get a serious systemic payoff from the technology, all vehicles must speak a common language. Disparate formats will both diffuse the safety benefits and confuse consumers.
Finally, V2x by itself is dependent on robust network connectivity. Not all wireless communication protocols are created equally in ensuring secured, deterministic message passing. And not all can achieve global ubiquity, efficiency, and cost-effective economies of scale.
Putting the driver in the passenger seat with onboard intelligence
The alternate camp says the way to increase driving safety, enjoyment, and productivity is to remove critical tasks from the driver and assign them to an onboard computer. To, essentially, create an autonomous car.
That means the individual vehicle reads and reacts to its surroundings, so it’s less dependent on connectivity and incoming messages.
There are challenges to address here, as well.
Security is as much a concern with autonomous cars as it is with V2x. You don’t want your connectivity or onboard systems vulnerable to hacking. And the profusion of discrete electronic control modules (ECMs) aboard an intelligent car presents a tempting array of attack vectors. Particularly since many are sourced from third-party suppliers and may go unchanged for years.
Then there’s the challenge of getting public buy-in. In the United States in particular, we have a driving culture that values individual driver control. If you ask your average driver if they want a computer to take over in an emergency, many will say no.
Joining the best of both worlds
So, my question is, why do we have to choose one or the other of those approaches? Why not blend the two to create an incredibly intelligent vehicle with an ADAS that can interpret and respond to its surroundings, and augment it with connectivity to cameras, LiDAR (Light Detection and Ranging), and radar? A vehicle that complements and enhances human cognitive abilities, rather than replacing them?
Reimagining the car, from the wheels up
To build this next-generation intelligent car, we’ll have to enable sensor output fusion and shift to a centralized computer—one that can mimic a human brain. And the vehicle will need 360-degree awareness.
This will require an unprecedented amount of compute horsepower in a small thermal envelope, scalable from vehicle to vehicle and generation to generation. It will also require the greatest ratio of performance per watt per cost per functional safety. Plus—and this is critical—the system has to fail safely and fail operationally.
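As a toy example of the sensor output fusion mentioned above, independent range estimates of the same object (say, from radar and LiDAR) can be combined by weighting each by its stated measurement confidence; the noise figures here are made up for illustration.

```python
# Minimal sketch of sensor output fusion: combine independent range estimates
# of the same object by inverse-variance weighting. Noise figures are assumptions.
def fuse_ranges(estimates):
    """estimates: list of (range_m, std_dev_m) pairs; returns fused (range, std_dev)."""
    weights = [1.0 / (sigma ** 2) for _, sigma in estimates]
    total = sum(weights)
    fused = sum(w * r for w, (r, _) in zip(weights, estimates)) / total
    return fused, (1.0 / total) ** 0.5

# Radar says 42.0 m +/- 0.5 m, LiDAR says 41.4 m +/- 0.1 m: the LiDAR reading dominates.
print(fuse_ranges([(42.0, 0.5), (41.4, 0.1)]))
```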
So, as you can see, all of this calls for a major reimagining of car architecture. As well as unprecedented levels of collaboration between industries, government agencies, consumers, and special interest groups.
Developing the car as a platform for innovation
That’s why my team at Intel is made up of a combination of forward thinkers from the automotive industry, industrial control, functional safety, and overall platform design. As well as hardcore IT silicon specialists who have dedicated their careers to building mission-critical servers.
Together, we’re looking at the car as a holistic platform. We’re exploring the best and most efficient way to capture data, then securely move it from the edge to the central brain in real time.
No more siloed systems. No more just bolting on another box when you want to add an ADAS feature. We’re building a whole new backbone. One with a brain that assists as needed and controls when control is handed over to it. Essentially, we’re creating an entire car that’s an ADAS.
To do that, we’re borrowing best practices from the avionics industry. Think about how commercial aviation has changed. And how it hasn’t.
We now have onboard computers, autopilot, and collision avoidance systems, and they all make air travel far safer and more reliable. But we haven’t made pilots obsolete; they’re still the apex decision makers. And that’s how we think the next-generation intelligent car should operate.
Driving the evolution of automotive with Intel IoT
For more than a century, motor vehicle research and development has concentrated on engineering mechanical systems. Automatic transmissions. Air conditioning. More efficient engines.
But that emphasis is shifting as the worlds of automotive manufacturing and computing converge. In the future, the majority of vehicle research will focus on software, not hardware.
Intel IoT is uniquely situated to play a central role in developing a connected car; one that not only leverages more powerful and efficient computing, but can also help to enhance safety and security. We’ve been involved in mission-critical systems for decades. And we want to catalyze the development of best practices, standards, and platforms that will be used across the automotive industry.
That’s why we created the Automotive Security Review Board (ASRB) to convene top thinkers from across the technology and automotive sectors. The ASRB is exploring how drivers use cars, what they value, and how we can expand the user experience. And, above all, how to increase safety and reduce accidents.
Intel is also inviting industry experts to comment on our new white paper, Automotive Security Best Practices: Recommendations for Security and Privacy in the Era of the Next-Generation Car. We’ll publish revisions based upon feedback and findings from the ASRB.
111 notes · View notes
huhnn-blog · 9 years ago
Text
Intel SGX: Debug, Production, Pre-release, what's the difference?
Submitted by Simon Johnson (Intel), Dan Zimmerman (Intel), DEREK B. (Intel) on January 7, 2016
Since releasing the SDK, we've had a few questions about debug vs. pre-release vs. release (production) mode enclaves.
Part of the security model of Software Guard Extensions is to prevent software from peeking inside and getting at secrets inside the enclave… but no one writes perfect code the first time round, so how do you debug an enclave?
SGX HW Debug Architecture
The SGX architecture supports two modes for enclaves: a Debug mode and a Production (non-debug) mode. Production-mode enclaves have the full protection provided by the architecture. In the hardware architecture, debug-mode enclaves differ from production enclaves in four basic ways.
Debug enclaves are created with the ATTRIBUTES.DEBUG bit set. This field appears in the EREPORT instruction's output, REPORT.ATTRIBUTES (see the Enclave Data Structures chapter in the Intel x86 Software Developers Manual). The debug bit is not measured as part of the build process, so Debug and Production enclaves can have the same measurement.
Keys returned by the EGETKEY instruction leaf in a debug enclave are different from the keys returned for the same enclave in production mode.
Debug enclaves can be introspected by an enclave aware debugger (using the SGX debug instructions) – a normal debugger cannot introspect a debug enclave.
Performance counters are enabled inside debug enclaves.
The SGX SDK includes the Intel® SGX debugger as a Microsoft Visual Studio plugin. See the Enclave Debugger section of the Intel® Software Guard Extensions Evaluation SDK User’s Guide for additional details.
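Because the debug bit is excluded from the measurement, a relying party that verifies an enclave report will normally inspect the attribute flags and reject debug enclaves. A minimal sketch of that policy check; the flag constant follows the value defined in the SGX SDK headers, but treat the surrounding logic as illustrative rather than part of the SDK.

```python
# Sketch of an attestation-policy check: debug and production enclaves can share
# the same MRENCLAVE, so the verifier must look at the report's ATTRIBUTES flags.
SGX_FLAGS_DEBUG = 0x0000000000000002   # DEBUG bit, as defined in the SGX SDK headers

def is_debug_enclave(attributes_flags: int) -> bool:
    """Return True if the DEBUG bit is set in the report's ATTRIBUTES.FLAGS field."""
    return bool(attributes_flags & SGX_FLAGS_DEBUG)

def accept_report(attributes_flags: int, allow_debug: bool = False) -> bool:
    """Reject reports from debug enclaves unless policy explicitly allows them."""
    return allow_debug or not is_debug_enclave(attributes_flags)

print(accept_report(0x0000000000000007))   # INIT | DEBUG | MODE64BIT set -> rejected
```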
SGX SDK Compilation Profiles
Traditionally a developer would have two basic profiles for compiling his code:
Debug: compiler optimizations are disabled, debug symbols are kept, suitable for source level debugging (typical for any SW development, standard terminology of common IDEs), plus the enclave will be launched in enclave-debug mode.
Release: compiler optimizations are enabled, no debug symbols are kept, suitable for production build, for performance testing and final product release (typical for any SW development, standard terminology of common IDEs), plus the enclave will be launched in enclave-production (non-debug) mode.
In addition we have added two more profiles to the support offered in the SGX SDK:
Pre-release: same as Release with regard to optimization and debug symbol support, but the enclave will be launched in enclave-debug mode, suitable for performance testing.
Simulation: builds the SGX application linked with the "simulation" libraries, not a real enclave; this allows the enclave to be run on any non-SGX-enabled Intel platform.
Currently the evaluation SDK allows the developer to create and run enclaves using the Debug and Pre-release profiles. Enclaves compiled under the Release profile will not work until the developer completes the production licensing process. If you would like to deliver a production-quality application using SGX, please contact the SGX Program for more information about a production license.
58 notes · View notes
huhnn-blog · 9 years ago
Text
Sensing the Simple Way to Better Healthcare
Posted by WayneWu Dec 17, 2015
Technology can solve complex healthcare problems, whether that be analysing large volumes of genomic data or allowing a specialist to see a 3D image of a beating heart, but we often overlook the simple, day-to-day tasks where technology is having a meaningful impact for patients and healthcare professionals today.
Providing Efficiency in Clinician’s Workflow
I’m seeing a lot of interest and excitement here in China around the Internet of Things in healthcare. Sensors are increasingly being used to not only provide more efficiency in a clinician’s workflow in a hospital setting but also to help those patients who require care in the home to live more independent lives.
A great example in development that I'd like to share is the Intel Edison-based uSleepCare intelligent bed, which is able to record a patient's vital signs, such as rate and depth of breathing, heart rate, and heart rate variability (HRV), without the need for nurse intervention. Movement sensors also help to identify where there may be cause for concern, for example over pressure ulcers or patients falling out of bed, which can prolong a hospital stay.
Early Identification of Abnormalities
The sensors not only collect data but also use WiFi to transmit it seamlessly to a cloud platform for analysis, where it can be used in a variety of meaningful ways. The most obvious and pressing use of the data is early identification of abnormalities, which can alert nursing staff to the need for human intervention and thus reduce the need for nurses to 'do the rounds', which is resource-intensive and costly for providers.
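A toy sketch of that abnormality check: compare each streamed reading against simple limits and raise an alert when something falls outside them. The thresholds and field names are placeholders, not clinical guidance or the uSleepCare data format.

```python
# Illustrative sketch of "early identification of abnormalities": flag vital
# signs that fall outside simple limits so nursing staff can be alerted.
NORMAL_RANGES = {
    "breathing_rate_bpm": (10, 24),   # placeholder limits, not clinical guidance
    "heart_rate_bpm": (50, 110),
}

def check_vitals(reading: dict) -> list:
    """Return a list of out-of-range vital signs for one sensor reading."""
    alerts = []
    for vital, (low, high) in NORMAL_RANGES.items():
        value = reading.get(vital)
        if value is not None and not (low <= value <= high):
            alerts.append(f"{vital}={value} outside [{low}, {high}]")
    return alerts

print(check_vitals({"breathing_rate_bpm": 8, "heart_rate_bpm": 72}))
```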
Additionally, the archive of data helps clinicians tackle chronic diseases at the patient level, spotting trends where a patient's condition may be worsening or improving. This is particularly valuable as devices such as the uSleepCare intelligent bed become available in a homecare setting. Imagine a community nurse being able to prioritise visits to those patients who are showing abnormal signs as recorded by IoT sensors, via alerts on a mobile device in real time. This is truly mobile healthcare: delivering the right care where and when it is needed, with the right information at the clinician's fingertips.
Data Collection Brings Efficiencies
And as this sensor technology becomes more prevalent in both the hospital and homecare setting, the data becomes increasingly useful at a population level too. It will assist providers in spotting trends which will in turn help them to become more efficient and allocate resources where appropriate.
All of which ultimately benefits the patient, particularly those with chronic conditions. They will perhaps spend less time in hospital with an improved level of care and be able to spend more time at home, with the confidence that their condition is being monitored by a healthcare professional 24/7.
The Internet of Things is having a rapidly transformative effect on healthcare. Investment by providers in sensor technology such as the Intel Edison-based uSleepCare intelligent bed is helping to drive efficiency savings while also having a meaningful impact on patient care. In China we're already pushing forward with implementation in this area, and I look forward to sharing the results in the future.
57 notes · View notes
huhnn-blog · 9 years ago
Text
Developing New Standards in Clinical Care through Precision Medicine
Posted by LesterRussell Sep 1, 2015
More specifically, we talked through trends impacting healthcare and population health, what's driving innovation to enable the convergence of precision medicine and population health, and how we at Intel are working with Oracle on a shared vision.
Delivering Precision Medicine to Tackle Chronic Conditions
I’d like to underline all of what we discuss in precision medicine by reinforcing what I’ve said in a previous blog, that as somebody who spends a portion of my time each week working in a GP surgery, it’s essential that I am able to utilise some of the fantastic research outcomes to help deliver better healthcare to my patients. And for me, that means focusing in on the chronic conditions, such as diabetes, which are a drain on current healthcare resources.
The link between obesity and diabetes is well known, but it is only when we see that a third of the global population is obese and that every 30 seconds a leg is lost to diabetes somewhere in the world that we can start to grasp the scale of the problem. The data we have available around diabetes in the UK highlights the scale succinctly:
1 in 7 hospital beds are taken up by diabetics
3.9m Britons have diabetes (majority Type 2, linked to obesity)
2.5m thought to have diabetes but not yet diagnosed
To combat the rise of diabetes, the NHS spends some £14bn each year treating the condition, including £869m spent by family doctors. What role can precision medicine play in creating a new standard of clinical care to help meet the challenges presented by chronic conditions such as diabetes?
Changing Care to Reduce Costs and Improve Outcomes
I see three changing narratives around care, all driven by technology. First, ‘Care Networking’ will see a move from individuals working in silos to a team-based approach across both organisations and IT systems. Second, ‘Care Anywhere’ means a move to more mobile, home-based and community care away from the hospital setting. And third, ‘Care Customization’ brings a shift from population-based to person-based treatment. Combine those three elements and I believe we have a real chance at tackling those chronic conditions and consequently reducing healthcare costs and improving healthcare outcomes.
How do we achieve better care at lower cost, though, from a technology point of view? This is where Intel and Oracle, together with industry and customers, are working to make this possible by overcoming the challenges of storing and analysing scattered structured and unstructured data, moving irreproducible manual analysis processes to reproducible analysis, and unlocking performance bottlenecks through scalable, secure, enterprise-grade, mission-critical infrastructure.
Convergence of Precision Medicine and Population Health
Currently we have two separate themes of Precision Medicine and Population Health around healthcare delivery. On the one hand Population Health is concerned with operational issues, cutting costs and resource allocation around chronic diseases, while Precision Medicine still very much operates in silos and is research-oriented with isolated decision-making. Both Intel and Oracle are focused on bringing together Precision Medicine and Population Health to provide a more integrated view of all healthcare related data, simplify patient stratification across care settings and deliver faster and deeper visibility into operational financial drivers.
Shared Vision of All-in-One Day Genome Analysis by 2020
We have a shared vision to deliver All-in-One Day primary genome analysis for individuals by 2020 which can potentially help clinicians deliver a targeted treatment plan. Today, we’re not quite at the point where I can utilize the shared learning and applied knowledge of precision medicine to help me coordinate care and engage my patients, but I do know that our technology is helping to speed up the convergence between healthcare and life sciences to help reduce costs and deliver better care.
47 notes · View notes
huhnn-blog · 9 years ago
Text
Intel IoT Platform Paving the Road to the Car of the Future
By Prakash Kartha on January 11, 2016
Telematics offerings expand daily, both in number and capabilities. Driven by consumer demand for vehicles that extend the connected lifestyle, provide enhanced safety, and reduce environmental impact, the market for in-vehicle telematics is surging.
The automotive industry is now the third-largest group filing technology patents. And connected cars are the third-fastest-growing technological device, after smart phones and tablets, according to this Forbes article.
So where are the greatest opportunities for innovation in the telematics field? How can you—whether you’re an equipment manufacturer, a mobile network operator, a developer, a wireless infrastructure provider, a system integrator, or a service provider—help create exciting new capabilities in tomorrow’s car?
In-vehicle Commerce With 4G LTE
The continued evolution and expansion of 4G LTE networks presents exciting new possibilities for in-vehicle commerce.
Until recently, subscription services, such as those for satellite radio, roadside assistance, and emergency services, have been the only way for OEMs and wireless providers to benefit from vehicle connectivity.
However, with wireless networks increasing in speed, bandwidth, and range, in-vehicle computing is growing more robust. And as cloud-based analytics becomes more common and affordable, the opportunities for truly “mobile” commerce will expand exponentially.
We’re already seeing an early iteration: GM, the first auto manufacturer to offer in-vehicle services, is once again at the forefront of consumer telematics. AtYourService, a new OnStar feature, connects drivers with offers from retailers, merchants, and hotels, based on location.
And in a clear sign that the automotive ecosystem is expanding to include a wide variety of industries, Visa, too, is getting in on the connected car market by essentially transforming vehicles into mobile credit cards. Payment credentials are loaded into the car and made available for secure, frictionless payments to other devices.
The credit card giant is currently trialing in-vehicle purchasing to enable users to pay for gas, tolls, and parking, and even order and purchase fast food, using hands-free interactive voice response (IVR).
And that’s just the beginning. Consumers will soon be able to spontaneously purchase apps, music, audio books, and other goods and services, making in-vehicle commerce as easy and convenient as shopping online.
For example, you could create an app that enables drivers to hear an advertisement for a concert and purchase tickets with a single action or voice command. Over time, the system could become increasingly intelligent, learning the driver’s patterns, behaviors, and preferences, and start making suggestions for purchases.
Sound far-fetched? It’s not. We can make it happen.
4G LTE is also enabling security systems that use multifactor authentication to prevent both car and identity theft. Systems like Mobii, a car and smart-phone app that combines interior cameras with car sensor data for enhanced security, are being developed by Intel and Ford Motor Company.
With Mobii, we’re working to create facial-recognition software that can be used not only to authenticate drivers, but to modify vehicle capabilities based on who is driving. Mobii will also use gesture recognition and voice recognition to operate various vehicle systems, to reduce driver distraction, and to improve the driving experience.
43 notes · View notes
huhnn-blog · 9 years ago
Text
Successful Mobile Analytics Solutions Demand A Mobile Mindset
When we design for mobile analytics solutions, we are not just building a report or a dashboard. We’re designing to deliver a superior mobile user experience each and every time.
This means we need to consider all facets of user interactions and take a holistic approach when dealing with all aspects of the “mobile analytics user life cycle.” This life cycle starts before installation and does not end after the mobile analytics asset is downloaded and consumed.
In this new Intel Tech Innovation series, I will be covering many aspects of the mobile analytics design from installation to user interface (UI), from functionality to performance. I consider these best practices to be a framework for innovation applicable to all mobile analytics solutions regardless of your organization’s size, industry, or business model.
As anyone who designs professionally can tell you (whether interior designer, technology consultant, or product specialist), design is both art and science. Building the end-product is the science part. How we go about it? That’s completely art. It requires both ingenuity and critical thinking, since not all components of design come with hard-and-fast rules that we can rely on.
You can’t design for mobile like you do for a PC
In order to design an effective solution, we need to first get a sense of what mobile analytics represents. Although your perception of mobile analytics will be influenced primarily by your understanding of enterprise analytics or business intelligence, there are fundamental differences between mobile design and traditional, PC-based analytics design. Some of these changes may be obvious and others are more subtle.
Understanding these distinctions is critical and must come before we write a single line of code. So I’ll explore some of these contrasts in this series, and include examples to illustrate different design concepts for the benefit of both the layman and tech savvy.
Great mobile design requires the right mindset
As mobile architects and designers, we must embrace a certain mobile design philosophy that will be unique to each of us and the environments we work in. This is important because that philosophy will be our guiding light when best practices alone may not be enough to help us navigate in uncharted waters.
To me, mobile design requires a mindset that embodies human ingenuity in order to generate excitement for new ideas that can lead to mobile solutions to address unmet needs. Therefore, I always argue that mobile analytics must draw its strength from a desire to go beyond the basic capabilities of a mobile device to deliver actionable insight for faster, better-informed decisions.
80 notes · View notes
huhnn-blog · 9 years ago
Text
How to effectively build a hybrid SaaS API management strategy
By Andy Thurai on February 3, 2014
In that post, we discussed the different API deployment models as well as the need to understand the components of API management, your target audience, and your overall corporate IT strategy. There was a tremendous readership and positive comments on the article (thanks for that!). But there seems to be a little confusion about one particular deployment model we discussed – the Hybrid (SaaS) model. We heard from a number of people asking for more clarity on this model. So here it is.
Meet Hybrid SaaS
A good definition of Hybrid SaaS would be: "Deploy the software as a SaaS service and/or as an on-premises solution, make those instances co-exist, communicate securely with each other, and be a seamless extension of each other."
Large enterprises grapple with a multitude of issues when they try to move from a primarily corporate datacenter to an all-out-in-the-cloud approach. Not only is that not feasible, it would also waste millions of dollars in sunk costs already invested in their current datacenters.
The current NSA revelations have muddied the waters around public cloud safety, further undermining enterprise control over applications and data in the cloud. Yet the pressure to adopt a mobile-first, cloud-first, or API-centric model means enterprises must move some operations to the cloud.
So enterprises are trying a hybrid model to reconcile the seemingly contradictory needs for agility and security. In doing so, most organizations end up building two different flavors of the same services. The cloud version is geared toward fast, easily provisioned, low-cost deployment, while the self-owned datacenter version is geared toward complete integration with the existing ecosystem. Often, this leads to two different silos.
Most software offerings today don't support Hybrid SaaS because they are not designed to operate both as a service and as an in-house install. A true Hybrid SaaS model allows you to install components that operate in both places with similar (if not the same) functions. In addition, there is a connector that allows continuous integration between the components to make this seamless.
Some savvy organizations are intelligent enough to build the consolidated hybrid API model that we have seen.
One API, Expose Anywhere
The ultimate goal is to publish APIs to the right audience with the right enterprise policies, right amount of security, and just the right amount of governance. The motto here is scale when you can, own what you must. What is the right amount for you? It depends on who your developers are, where your APIs are located now, and what sort of security and compliance requirements you have.
The concept of One API is to publish and be available in multiple places, accessed by multiple audiences (internal developers/applications, external developers, and partners) and be available for multiple channels (mobile, social, devices, etc.). All demand a different experience, which is where the hybrid model really excels.
So how does it actually work? In a hybrid API management deployment, the API traffic comes directly to the enterprise and the API metadata is available in two places: on premise and in the cloud. The API metadata available from an on-premise portal is usually targeted to an internal developer.
Here the metadata and API documentation might be slightly different: an internal developer may require a different response format (XML, for instance) for integration with internal systems, and use a different access mechanism (API keys or internal credentials) compared to an external or zero-trust developer. In either case, the API traffic never goes to the cloud, or to any developer portal for that matter; this is often a point of confusion in the hybrid model.
Metadata that is available in the cloud would be described differently and use common standards for access such as OAuth and JSON, with rich community features to encourage the adoption of APIs. While the endpoint information is advertised in the cloud, the traffic itself is sent directly to the enterprise datacenter, with policies enforced by an API gateway. Also, the UX and the registration process are lighter and faster in order to attract a wider audience.
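To make the split concrete, the hypothetical sketch below describes one backend API advertised in two ways: the on-premise portal offers XML and API keys to internal developers, the cloud portal offers JSON and OAuth to external ones, and runtime traffic always targets the enterprise gateway. The field names are invented for illustration, not any vendor's schema.

```python
# Hypothetical sketch of "one API, expose anywhere": one backend, two exposures.
ORDER_API = {
    "backend": "https://gateway.example.internal/orders/v1",  # traffic never leaves the enterprise
    "exposures": {
        "internal_portal": {
            "format": "XML",
            "auth": "api_key",          # internal credentials
            "docs": "full integration guide",
        },
        "cloud_portal": {
            "format": "JSON",
            "auth": "oauth2",           # lighter registration for zero-trust developers
            "docs": "quick-start and community samples",
        },
    },
}

def portal_listing(api: dict, portal: str) -> dict:
    """Metadata a given portal advertises; the endpoint stays the enterprise gateway."""
    exposure = api["exposures"][portal]
    return {"endpoint": api["backend"], **exposure}

print(portal_listing(ORDER_API, "cloud_portal"))
```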
Hybrid SaaS API Management
This provides a number of benefits for the enterprise. It gains increased control over the API definitions it chooses to advertise to external developers, while zero-trust developers can interact in a shared cloud that provides API metadata for a collection of APIs, and public developers can sign in once and get access to a set of useful tools. Further, runtime traffic enforcement is always handled by the enterprise, providing full visibility into API transactions as well as the API responses themselves.
52 notes · View notes
huhnn-blog · 9 years ago
Text
Software Defined Perimeter for API Security (And anything else)
By Blake Dournaee on September 29, 2015
SDP is nascent but beginning to get coverage from the likes of Gartner. If you are new to it, the concept is similar to an on-demand, point-to-point, application-level VPN.
Its main claim to fame is its ability to create the equivalent of an air-gapped network from a security posture perspective. It's a great protocol for protecting everything from thick client applications to Web APIs. Further, it allows enterprises to push corporate applications directly into the public cloud and feel good about it, which is a big deal given the ability of the public cloud to reduce operational costs.
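A conceptual sketch of that flow: a gateway drops everything by default and only opens a point-to-point path once a controller has authorized the client for a specific application. This mimics the idea, not the actual SDP wire protocol or any particular product.

```python
# Conceptual sketch of the SDP posture: default-drop gateway, controller-granted,
# per-application paths. Names and flow are illustrative only.
class Controller:
    def __init__(self, entitlements):
        self.entitlements = entitlements          # client_id -> set of allowed apps

    def authorize(self, client_id, app):
        return app in self.entitlements.get(client_id, set())

class Gateway:
    def __init__(self, controller):
        self.controller = controller
        self.open_paths = set()                    # (client_id, app) pairs

    def connect(self, client_id, app):
        if self.controller.authorize(client_id, app):
            self.open_paths.add((client_id, app))  # on-demand, point-to-point path
            return "connected"
        return "dropped"                           # default posture: invisible, "air-gapped"

gw = Gateway(Controller({"alice": {"billing-api"}}))
print(gw.connect("alice", "billing-api"))    # connected
print(gw.connect("mallory", "billing-api"))  # dropped
```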
19 notes · View notes
huhnn-blog · 9 years ago
Text
Extending the trust from the device to the cloud
By Divya Kolar on October 6, 2014
To enable radically new user experiences, Intel Labs has been exploring system security and privacy with the goal that any device that computes on Intel conducts its operations in a secure and privacy preserving manner. Last year, Intel published the architecture for IntelÂŽ Software Guard Extensions (IntelÂŽ SGX). This technology allows software to instantiate a protected region of its address space known as an enclave, which provides confidentiality and integrity protection for data and code against potentially malicious privileged code or hardware attacks such as memory probes.
An Enclave within the Application’s Virtual Address Space
Today, we are excited to see Microsoft Research demonstrate how IntelÂŽ SGX could be utilized to extend the protections into the cloud to preserve trust in critical workloads.
The technical community appears to agree that this is an interesting combination of software and hardware security, as their paper won a best-paper award at the 11th USENIX Symposium on Operating Systems Design and Implementation. Andrew Baumann from Microsoft Research explains: “When I store my data on my computer, I know that, with reasonable precaution, I can keep anyone else from accessing that data. However, when I store my data in the cloud, I must trust not only the cloud provider, but also the cloud provider’s operations staff and the legal authorities with jurisdiction over the cloud provider’s computers. This creates a huge friction on the movement of data and computing to the cloud.” Intel® SGX provides a key technology to address these concerns.
Matthew Hoekstra, Director of the Security Solutions Lab in Intel Labs, who supported Microsoft in this research, notes that “this has been an excellent collaboration which gave Intel the opportunity to consider compelling new usages of Intel® SGX envisioned by some of the world’s top software experts”. Congratulations to the researchers at Microsoft on this outstanding achievement; we look forward to the additional creative ideas for the use of Intel® SGX that technology innovators will propose in the coming months and years.
38 notes · View notes
huhnn-blog · 9 years ago
Text
Intel and AT&T to Drive Research to Enable Software Defined Networking
By Wen-Hann Wang on October 20, 2014
Software defined networking (SDN) offers a new approach to network management and programmability for the purpose of enabling more flexible network architectures and agile service deployments. SDN introduces new standards and mechanisms to manage networks and quickly introduce new functionality through centralized visibility and network control.
While SDN shows considerable promise, carrier networks remain a difficult challenge since user populations differ widely in density, physical surroundings can vary dramatically, and networks are large in scale. In order to support these diverse requirements, today’s carrier networks use a complex mix of communication technologies and dedicated equipment.  Carrier providers are also exploring Network Function Virtualization (NFV) which must be integrated into the overall network architecture.
To extend SDN to carrier networks (including NFV integration), Intel and AT&T have created a joint effort in collaboration with notable university researchers and open source contributors.  The Intel Strategic Research Alliance (ISRA), established by Intel and seeded with over $1M in 2014, aims to transform carrier network infrastructure by enabling software packet processing and flexible service architectures.  The ISRA is led by well-known computer networking researchers Sylvia Ratnasamy and Scott Shenker from University of California at Berkeley.
Additionally, AT&T has provided $1M in 2014 for ON.Lab, a non-profit organization led by Guru Parulkar that develops open source tools and platforms for SDN.  ON.Lab is developing ONOS, an open and distributed network operating system for SDN for service provider networks working on performance, scale-out design, high availability, and core features. Intel is also a member of ON.Lab and a collaborator in the ONOS work.
Intel’s ISRA on SDN is hosted by UC Berkeley, but also includes researchers from Stanford University (Nick McKeown), Princeton University (Jennifer Rexford), Carnegie Mellon University (Vyas Sekar), and the Swiss Federal Institute of Technology in Lausanne, or EPFL (Katerina Argyraki). The focus areas of the alliance include novel approaches to adapt SDN to carrier networks, efficient software packet processing, and flexible service architectures for scaled networks.
Intel has already been active in accelerating SDN efforts through its own technology innovations and collaborations with customers and ecosystem partners. Intel’s most recent architectural developments have focused on increasing performance to support the demand for growing service capabilities. Intel is also a leading contributor to open source projects and industry standards like OpenDaylight, ETSI NFV, and the Open Networking Foundation.
38 notes · View notes
huhnn-blog · 9 years ago
Text
Top 10 ways Intel drives High Performance Computing Democratization in 2016
By James Reinders on December 18, 2015
Democratization of High Performance Computing (HPC) is more than a vision for us at Intel®; it is a passion.
I’ve personally been awestruck by the amazing expansion of HPC uses in the past decade, yet many businesses have yet to utilize HPC. When you consider the positive effects that HPC is having on so many fields, it is easy to understand how HPC is a catalyst to lift every area of science, engineering and research. Democratization of HPC leads to more lift, and hence our passion for it.
Intel’s continued leadership in HPC, with products such as our Intel Xeon® processors, Intel Xeon Phi™ coprocessors and the recently launched Intel® Omni-Path Fabric, has formed the foundation of the strong HPC ecosystem we enjoy today while invigorating the further democratization of HPC.
Our passion to help drive the democratization of HPC is exemplified by many things.  Here is my list of ten things which caught my attention as being most significant as we enter 2016, in no particular order:
Code Modernization: Intel helps make “code modernization” more than just a hot topic by providing events and training (attended by tens of thousands of developers around the world in this past year), and many valuable resources, including online training, on the web. The key to me is how this helps us all “Think Parallel”. This was reinforced to me when, as part of our Code Modernization work in 2015, we ran a student competition. When the grand prize winner, who had reduced the application run time from 45 hours to under 9 minutes, was asked about the greatest tool he utilized, he said ‘my brain.’ I hope, by our code modernization efforts, to train more of our brains to “Think Parallel”. This is an important dialogue that will continue strongly in 2016.
OpenHPC: Intel is a founding member and makes significant contributions to the OpenHPC community. OpenHPC is driven by a desire to aggregate a number of common ingredients required to deploy and manage High Performance Computing (HPC) Linux clusters. This in turn, makes deploying an HPC system easier. I’ve enjoyed the early debates among community members as to the best solutions for OpenHPC to embrace. The debates will help us all do better, and I look forward to seeing what 2016 brings as the community forges ahead and grows.
Pearls Books: Learning by examples is critical in order to build expertise in any subject. When the examples come from world experts who share the challenges and successes of effectively using parallel programming, the learning available is substantial. I enjoyed editing these volumes both because of the experts with whom I was privileged to work, and the lessons I learned as well. High Performance Parallelism Pearls Volumes 1 and 2 are definitely worth reading to help in our drive to “Think Parallel”.
Free Libraries for All: Yes, you can get four of the best libraries anywhere for free, no strings attached. Intel Math Kernel Library, Intel Data Analytics Acceleration Library, Intel Integrated Performance Primitives, and Intel Threading Building Blocks can all be obtained under community licenses from a single Intel website. The libraries are also part of Intel Parallel Studio XE 2016, so you can choose free with community support or purchase them with direct-from-Intel support. Either way, these libraries are very powerful.
Free Tools for instructors, classroom use, academic researchers, and open source work: Intel's award-winning tools lead the industry in support for high performance programming. They are worth the price. But if you qualify for free access, then you have no excuse not to be using these incredibly powerful tools.
Big Data and Machine Learning: The Intel Data Analytics Acceleration Library (DAAL) premiered in 2015 and has proven its value in accelerating Big Data processing including Machine Learning. Most Big Data and Machine Learning work is done on Intel-based machines, and Intel DAAL gives that work an additional boost.
Intel Parallel Studio XE 2016: With new features like Vectorization Advisor, MPI Snapshot and Intel Data Analytics Acceleration Library (DAAL), Intel’s industry leading tools continue to give software developers the best capabilities to optimize performance for Intel architecture.
Second Generation Intel Xeon Phi Processors: With our first three systems deployed outside Intel using pre-production “Knights Landing” processors in 2015, the excitement and anticipation for the Second Generation of Intel Xeon Phi processors has never been higher. The revolutionary move to put Intel’s many-core architecture into a processor will help make the high levels of performance available to an unprecedented range of users in 2016. My personal goal to help this is to publish a useful book about it, with my colleagues at Intel, by the middle of 2016.
Python: Introduction of highly optimized Python support, specifically high-performance SciPy and NumPy (a short sketch follows this list). Information about the current optimized Python distribution is at http://bit.ly/intel-python.
Blueprint for your clusters: Intel’s HPC Scalable System Framework is a flexible blueprint for developing high performance, balanced, power-efficient and reliable systems. The performance and consistency this brings makes HPC easier to deploy, and that helps expand usage of HPC. I’m excited about the systems we will see come to market, in 2016, which utilize our HPC Scalable System Framework to help meet the needs of all types of HPC users, new and old.
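As a small illustration of the optimized Python point above: the win comes from expressing the math as NumPy array operations, so it executes in the optimized native libraries an accelerated distribution ships with rather than in the interpreter. The matrix sizes here are arbitrary.

```python
# Push the work into NumPy so it runs in optimized native kernels (BLAS/MKL in an
# accelerated distribution) instead of a Python interpreter loop.
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((1000, 1000))
b = rng.random((1000, 1000))

def matmul_loops(a, b):
    """Pure-Python triple loop, defined only for contrast: same result, far slower."""
    n, k, m = a.shape[0], a.shape[1], b.shape[1]
    return np.array([[sum(a[i][p] * b[p][j] for p in range(k))
                      for j in range(m)] for i in range(n)])

c = a @ b   # one call into the optimized BLAS the distribution provides
print(c.shape)
```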
53 notes · View notes
huhnn-blog · 9 years ago
Text
The Enchantress of Numbers: Remembering Ada Lovelace, the Mother of Tech Marketing
By Kim Gerardi on December 10, 2015
As we look ahead to the future of tech, marketing, and diversity in STEM fields, it’s important to take a glance back. History can tell us a lot about who we are now, but most importantly it can illustrate ways that we can get from where we are to where we want to be. One of my personal inspirations is the incomparable Ada Lovelace. Ada is often called the first programmer, but she was so much more than that — and I suggest that her most important role was as the first tech marketer. Modern leaders, innovators, and storytellers in tech can learn a lot from her and her story.
“The Enchantress of Numbers”
Ada Lovelace
Born Augusta Ada Byron in 1815 to a famous poet and a devout student of science, Ada was virtually fated to have a remarkable life. Her father, George Gordon Byron (or more commonly, Lord Byron), was only married to her mother for a year before they separated, when Ada was just a month old. Ada's mother, Anne Isabella Noel Byron, was a devoted lifelong student of math and science. Afraid that Ada would inherit her father's extreme emotions and wild behavior, Anne kept her daughter on a strict regimen of math and science. Art and poetry were forbidden.
From this strict upbringing, Ada grew up to be an educated woman with a brilliant mathematical mind that was evident to everyone around her. When she was 18, she was introduced to a renowned mathematician and inventor, Charles Babbage, who was immediately impressed by her. Despite her mother’s attempts to quell her imagination, Ada was still filled with ideas about the future of science. Later Charles would call her “The Enchantress of Numbers,” observing that she threw “her magical spell around the most abstract of Sciences and has grasped it with a force which few masculine intellects could have exerted over it.”
Marketing the Engine of Change
Charles showed Ada his Difference Engine, a counting machine he’d been hired to design by the British government. Ada was fascinated by the machine and continued to write letters back and forth with Charles for the next nine years; during this time, Babbage updated his Difference Engine into a more general-purpose computer he called the Analytical Engine. After the Italian engineer Luigi Menabrea wrote a long paper about the Analytical Engine, Charles convinced Ada to translate it into English and improve it with a few footnotes. However, the final English translation was nearly three times as long, and illustrated a sort of insight into the possibilities of the technology that had eluded even the inventor.
It may have been Babbage who did the engineering and Menabrea who first described it, but it was Ada who was able to take a description of a machine and contextualize it, articulate the human story behind it, and envision what it might mean for the future. Ada put the pieces together and put a human face on computing technology — she was the first tech marketer, and in this regard she was a visionary.
A Testament to the Power of Diversity
Ada was able to do this in an era in which the idea of women in STEM was virtually unthinkable. To give some context, in the same time and place when Ada was articulating the human face of technology, the Brontë sisters had to write their novels under men’s names in order to get published. Ada’s story is but one of countless illustrations of just how important it is to bring diversity to STEM. You never know where the next revolutionary idea is going to come from.
Many historians have pointed out that Ada didn’t add much to the way counting machines were built, and that she probably didn’t write any of the punch card programs the machines ran. As a brilliant mathematician, it’s likely she had the skill set to do so; however, her true skill was that of a visionary and as a connector of the dots between the hard sciences and the humanities.
Bringing Poetry into STEM
Ada Lovelace
Whenever I meet with young STEM students, my advice for them is just the opposite of Ada’s mother’s guidance — I encourage them to expand their studies by taking classes in poetry, literature, and other forms of creativity that are vastly different from mathematics. Although the literary arts may seem unrelated on the surface, they get you to think differently. Fostering the creative side of your brain can help you look at problems in new ways.
Even more, a proficiency in language empowers you to communicate your insights effectively to others, which is a critical skill. Ada's ability to connect the dots between a technological innovation and the possibilities it held was invaluable. More than a century later, the personal computer first appeared as a consumer product in the 1969 Neiman Marcus holiday fantasy gift catalogue as a $10,000 recipe storage machine. Today, look at all that has been possible because of the personal computer. Creative possibility thinking and skilled communication are, to be sure, skills worth acquiring in STEM fields.
28 notes · View notes