#there are no windows. the computers onboard see well enough. we have sensors for that kind of thing.
quietwingsinthesky · 6 months
Text
if i ever end up writing even’s actual first encounter and adventure with the doctor, one of the running themes is going to be how there aren’t any windows on the ship.
which doesn’t seem like such a big thing at first, but that’s from our point of view, or the doctor’s, because we (and him) know the size of the universe. we know what space is, maybe not entirely, but enough to know the shape of it, yeah? we know what stars look like even when their light is a trillion years and miles away. but there are no windows on this ship. and even has never seen the night sky, has never seen a star or a planet or just the empty space that’s separated from them by feet of metal and a great deal of luck. even has lived their whole life inside, and space is not a thing they can see or touch. it’s an abstract threat beyond their walls. they could not imagine the enormity of it if they tried.
they don’t know the shape of the ship either. imagine someone let you run through a maze and then told you to draw it. you could draw the corridors you walked through and the dead-ends you ran into, but could you for certain say that you ever found the edges of it? that you know the walls on the outside look like the walls on the inside? how big is it? and really, what you should be imagining is that your maze is one of a dozen different mazes all tied together with rubber bands, and none of you actually know what the whole thing looks like, and you don’t have time to talk through the walls to figure that out because if you stop moving for too long, the food dispenser at the end won’t give you anything despite reaching your goal because you were too slow, better try harder next time, stop talking and start running.
even isn’t surprised that the tardis is bigger on the inside. it doesn’t hit them until the doctor lets them see the ship they were on from the outside. like a farewell wave, opening the doors of the tardis as she orbits the ship, and even takes in the shape of it first. (they can’t figure out where they lived, where they worked, from the outside. they don’t recognize any of it.)
but then they see everything else, beyond the ship, while the doctor is standing behind them and saying something reassuring, ‘they’ll be alright without you, don’t worry about them, we fixed everything’, absently, kindly, because he knows they need a moment alone to say goodbye but someone has to stand at the controls and the silence gets to him a little too much. doesn’t see that even’s eyes are so, so wide staring beyond the ship at the universe around it.
it’s too big. they panic. they shut themselves inside the tardis.
that’s what gives the doctor pause. makes him waver, here, because even’s good companion material, they’ve got that spark in them that makes them want to help, whatever it takes, (this is what will undo them, eventually.) and he doesn’t want to leave them there. but you can’t just take something out of its natural habitat and expect it to flourish. that’s how you get wilting leaves and patchy coats and enough stress to kill something from heartbreak alone.
‘i can take you back,’ he offers. it’s the last time that’ll ever be true, but if he knew that when he said it, it’d be a very different kind of story. so he doesn’t.
even is shaking. tearing up. scared. elated? hiccuping on little gasps of air. the stars are beautiful, and terrifying, and now that even knows they exist, they can never go back to before they knew.
the doctor is cruel like that. he wants to show you the universe.
but here’s what’s true now and will be true forever: even doesn’t want to go back. i mean, god, could you blame them. one day, in a few years/decades/centuries/after the long way round to the end of the universe and the short trip back, he’s going to tell them that they can either say to his face that they’d rather he’d left them on that ship or they can stop adding it to his list of sins. they won’t be able to.
so they say no.
and they pull the doors back open just a crack, wide enough for one eye, small enough to shut again with the tremble of a hand. and they peek back out at the universe they’ve been living in. they don’t notice the ship, as the tardis breaks her orbit, speeding further and further away to a destination its passengers will never see.
that’s why there are no windows on the ship. well, that, and it wasn’t very well-designed in the first place.
ours-is-the-fury · 4 years
Text
By The Moons I Honour Thee
Contains: SWEARING, VIOLENCE, POSSIBLE NSFW THEMES
CHAPTER 2...
The door slid open onto the busy street and Jules quickly dashed through followed by a stream of bullets. The flesh avatar followed round him onto the pavement before a red energy bolt made a neat hole through his head. The puddle of flesh sent the last of the people screaming and Jules sprinting.
A voice came through on his earpiece. “What have you done?” the reptilian voice asked. “There issss a Church landing sssship here. They are armed!”
“Get the ship in the air now, lock onto my communicator and teleport me onboard!” Jules hung up, still running. Fuck, fuck, fuck, he thought to himself. They couldn’t be allowed to get hold of the Core, they’d know everything. The galaxy would become a police state run by the Church. That couldn’t happen. The sound of the Selene’s engines interrupted his thoughts; the ship was cruising amidst the air traffic between the buildings.
“Prepare to teleport,” Rakara announced over the com. Jules stopped running once he was on board the Selene, suddenly tasting a metallic tang.
“Get us out of here, fast!” he shouted up at Rakara. The cargo bay shook as she launched into a vertical climb, triggering the artificial gravity’s sudden activation.  
“Whereeee are we going?” Rakara queried, flicking switches, and spooling up the slipstream engines. The ladder creaked and she looked round to see Jules climbing up.
“Get us into Sontaran space, they won’t come looking for us there.”
“Ssssontaran sssspace?!” Jules noted a new expression on the Ice Warrior’s face: surprise. “Are you nutssss?!”
“I very well might be, Rakara, but that’s also an order so please, jump.”
Rakara pulled down on the overhead lever and the Selene jumped into a Slipstream tunnel.
 *****
Jules sat back in the pilot seat, his classic earth sunglasses on, admiring the bright lights of slipstream while listening to the song Rakara was singing in the shower. Ancient North Martian was quite a beautiful melody when sung properly and the singer-turned-bounty-hunter hadn’t lost any of her talent.
“Computer, is the Cloak repaired enough for sensor shielding?” He adjusted his sunglasses.
“Affirmative, Captain,” the monotone voice replied. If they couldn’t be completely invisible then being undetectable would have to do, Jules thought. He just hoped no-one would look out of their windows.
Rakara climbed up the ladder dressed in just a towel, in the process of drying her dreadlock hair. Her large, crimson ovula eyes blinked repeatedly in the bright light. “I have come to sleep, Julessss.” She walked over to the bed built into the wall just behind the cockpit. “If you don’t mind.”
“Of course.” He flicked a switch and the filters dropped over the windows, blocking all light, before removing his sunglasses and pinching his nose. “I’ll leave you to it. By the way, your singing is amazing.” He smirked and climbed down the ladder. He almost caught a glimpse of her cheeks paling with a slight blush before the pillow was thrown at his head.
Jules trudged towards his bunk built into the side of the small cargo bay. The bay was built between the engines and wasn’t quite big enough, Jules thought. The ship itself was originally built for one occupant and no cargo. When time travel capabilities, cloaking and tremendous firepower come together, little space is left. But the Selene was home. The cargo bay never really contained much cargo, it was just the empty space where stuff went; his bed for instance, and the shower. The weapon locker took up most of the space on the left wall. Against the back wall sat banks of servers to make the ship’s Computer run as fast as possible. The previous owner had been a little eccentric, claiming that he had to defuse an ancient bomb before he could fly it. He was, however, a raving alcoholic and a gambler and it didn’t take Jules long to win the ship from him at the Black Pyramid Casino. The cloaking device made it a perfect vessel for bounty hunting, and the time circuits were a plus.
Jules walked over to the weapon locker and pulled down his sword. The last time he’d used it was when he rescued Rakara from slavers. He drew the sword with a flourish and the bronze-coloured blade gleamed in the light. Its single wicked-sharp edge was cut off at an angle at the top, giving it a slightly industrial look compared to most swords. The blade was made from Metalert, a reinforced alloy of Dalekanium, which meant it was strong and had a keen enough blade to cut through most enemies, including Daleks themselves if he could get close enough.
Out of nowhere, the engine pitch suddenly changed and Jules ran for the ladder. When he reached the top, he saw Rakara already at the controls. “We have dropped out of sssslipsssstream - look.” She pointed out the window.
Outside was a warzone. Sontaran command ships and scout ships were under attack from a giant fleet of Cyber cruisers...
planetarduino · 5 years
Text
Get started with machine learning on Arduino
This post was originally published by Sandeep Mistry and Dominic Pajak on the TensorFlow blog.
Arduino is on a mission to make machine learning simple enough for anyone to use. We’ve been working with the TensorFlow Lite team over the past few months and are excited to show you what we’ve been up to together: bringing TensorFlow Lite Micro to the Arduino Nano 33 BLE Sense. In this article, we’ll show you how to install and run several new TensorFlow Lite Micro examples that are now available in the Arduino Library Manager.
The first tutorial below shows you how to install a neural network on your Arduino board to recognize simple voice commands.
Example 1: Running the pre-trained micro_speech inference example.
Next, we’ll introduce a more in-depth tutorial you can use to train your own custom gesture recognition model for Arduino using TensorFlow in Colab. This material is based on a practical workshop held by Sandeep Mistry and Dan Coleman, an updated version of which is now online. 
If you have previous experience with Arduino, you may be able to get these tutorials working within a couple of hours. If you’re entirely new to microcontrollers, it may take a bit longer. 
Example 2: Training your own gesture classification model.
We’re excited to share some of the first examples and tutorials, and to see what you will build from here. Let’s get started!
Note: The following projects are based on TensorFlow Lite for Microcontrollers which is currently experimental within the TensorFlow repo. This is still a new and emerging field!
Microcontrollers and TinyML
Microcontrollers, such as those used on Arduino boards, are low-cost, single chip, self-contained computer systems. They’re the invisible computers embedded inside billions of everyday gadgets like wearables, drones, 3D printers, toys, rice cookers, smart plugs, e-scooters, washing machines. The trend to connect these devices is part of what is referred to as the Internet of Things.
Arduino is an open-source platform and community focused on making microcontroller application development accessible to everyone. The board we’re using here has an Arm Cortex-M4 microcontroller running at 64 MHz with 1MB Flash memory and 256 KB of RAM. This is tiny in comparison to Cloud, PC, or mobile but reasonable by microcontroller standards.
Arduino Nano 33 BLE Sense board is smaller than a stick of gum.
There are practical reasons you might want to squeeze ML on microcontrollers, including: 
Function – wanting a smart device to act quickly and locally (independent of the Internet).
Cost – accomplishing this with simple, lower cost hardware.
Privacy – not wanting to share all sensor data externally.
Efficiency – smaller device form-factor, energy-harvesting or longer battery life.
There’s a final goal which we’re building towards that is very important:
Machine learning can make microcontrollers accessible to developers who don’t have a background in embedded development 
On the machine learning side, there are techniques you can use to fit neural network models into memory constrained devices like microcontrollers. One of the key steps is the quantization of the weights from floating point to 8-bit integers. This also has the effect of making inference quicker to calculate and more applicable to lower clock-rate devices. 
TinyML is an emerging field and there is still work to do – but what’s exciting is there’s a vast unexplored application space out there. Billions of microcontrollers combined with all sorts of sensors in all sorts of places which can lead to some seriously creative and valuable TinyML applications in the future.
What you need to get started
An Arduino Nano 33 BLE Sense board
A Micro USB cable to connect the Arduino board to your desktop machine
To program your board, you can use the Arduino Web Editor or install the Arduino IDE. We’ll give you more details on how to set these up in the following sections
The Arduino Nano 33 BLE Sense has a variety of onboard sensors meaning potential for some cool TinyML applications:
Voice – digital microphone
Motion – 9-axis IMU (accelerometer, gyroscope, magnetometer)
Environmental – temperature, humidity and pressure
Light – brightness, color and object proximity
Unlike the classic Arduino Uno, the board combines a microcontroller with onboard sensors, which means you can address many use cases without additional hardware or wiring. The board is also small enough to be used in end applications like wearables. As the name suggests, it has Bluetooth LE connectivity so you can send data (or inference results) to a laptop, mobile app or other BLE boards and peripherals.
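To make that last point a little more concrete, here is a minimal illustrative sketch (not one of the TensorFlow examples) that exposes a single result byte over BLE using the ArduinoBLE library. The service and characteristic UUIDs and the gestureResult variable are placeholder values for illustration only:

#include <ArduinoBLE.h>

// Illustrative only: advertise one byte (e.g. an inference result) over BLE.
BLEService inferenceService("19B10000-E8F2-537E-4F6C-D104768A1214");      // example service UUID
BLEByteCharacteristic resultChar("19B10001-E8F2-537E-4F6C-D104768A1214",  // example characteristic UUID
                                 BLERead | BLENotify);

void setup() {
  Serial.begin(9600);
  if (!BLE.begin()) {                      // start the BLE radio
    Serial.println("Starting BLE failed!");
    while (1);
  }
  BLE.setLocalName("NanoBLESense");
  BLE.setAdvertisedService(inferenceService);
  inferenceService.addCharacteristic(resultChar);
  BLE.addService(inferenceService);
  resultChar.writeValue(0);
  BLE.advertise();                         // a laptop or phone can now connect and subscribe
}

void loop() {
  BLE.poll();                              // service BLE events
  byte gestureResult = 0;                  // placeholder: index of the most likely class
  resultChar.writeValue(gestureResult);    // subscribed centrals receive the update via notify
  delay(200);
}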
Tip: Sensors on a USB stick – Connecting the BLE Sense board over USB is an easy way to capture data and add multiple sensors to single board computers without the need for additional wiring or hardware – a nice addition to a Raspberry Pi, for example.
TensorFlow Lite for Microcontrollers examples
The inference examples for TensorFlow Lite for Microcontrollers are now packaged and available through the Arduino Library manager making it possible to include and run them on Arduino in a few clicks. In this section we’ll show you how to run them. The examples are:
micro_speech – speech recognition using the onboard microphone
magic_wand – gesture recognition using the onboard IMU
person_detection – person detection using an external ArduCam camera
For more background on the examples you can take a look at the source in the TensorFlow repository. The models in these examples were previously trained. The tutorials below show you how to deploy and run them on an Arduino. In the next section, we’ll discuss training.
How to run the examples using Arduino Create web editor
Once you connect your Arduino Nano 33 BLE Sense to your desktop machine with a USB cable you will be able to compile and run the following TensorFlow examples on the board by using the Arduino Create web editor:
Compiling an example from the Arduino_TensorFlowLite library.
Focus on the speech recognition example: micro_speech
One of the first steps with an Arduino board is getting the LED to flash. Here, we’ll do it with a twist by using TensorFlow Lite Micro to recognise voice keywords. It has a simple vocabulary of “yes” and “no”. Remember this model is running locally on a microcontroller with only 256KB of RAM, so don’t expect commercial ‘voice assistant’ level accuracy – it has no Internet connection and on the order of 2000x less local RAM available.
Note the board can be battery powered as well. As the Arduino can be connected to motors, actuators and more this offers the potential for voice-controlled projects.
Running the micro_speech example.
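Under the hood, each of these examples follows the same TensorFlow Lite Micro pattern: load the model from a C byte array, give the interpreter a block of working memory (the tensor arena), fill the input tensor, and call Invoke(). The fragment below is a simplified illustration of that pattern rather than the actual micro_speech source; exact header paths and the resolver class name have moved between library versions, and g_model, kArenaSize and the feature-filling step are placeholders:

#include <TensorFlowLite.h>
#include "tensorflow/lite/micro/all_ops_resolver.h"
#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "model.h"  // C array holding the .tflite flatbuffer (placeholder name)

namespace {
  constexpr int kArenaSize = 10 * 1024;   // working memory for tensors; the right size depends on the model
  uint8_t tensor_arena[kArenaSize];
  tflite::MicroErrorReporter error_reporter;
  tflite::AllOpsResolver resolver;        // maps ops in the model to micro kernel implementations
  const tflite::Model* model = nullptr;
  tflite::MicroInterpreter* interpreter = nullptr;
}

void setup() {
  model = tflite::GetModel(g_model);      // g_model is the byte array from model.h
  static tflite::MicroInterpreter static_interpreter(
      model, resolver, tensor_arena, kArenaSize, &error_reporter);
  interpreter = &static_interpreter;
  interpreter->AllocateTensors();         // carves the input/output tensors out of the arena
}

void loop() {
  TfLiteTensor* input = interpreter->input(0);
  // ... fill input->data with audio features or sensor readings here ...
  if (interpreter->Invoke() == kTfLiteOk) {
    TfLiteTensor* output = interpreter->output(0);
    // output->data now holds one score per class ("yes", "no", silence, unknown, ...)
  }
}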
How to run the examples using the Arduino IDE
Alternatively, you can try the same inference examples using the Arduino IDE application.
First, follow the instructions in the next section, Setting up the Arduino IDE. In the Arduino IDE, you will see the examples available via the File > Examples > Arduino_TensorFlowLite menu.
Select an example and the sketch will open. To compile, upload and run the example on the board, click the arrow icon:
For advanced users who prefer a command line, there is also the arduino-cli.
Training a TensorFlow Lite Micro model for Arduino
Gesture classification on Arduino BLE 33 Nano Sense, output as emojis.
Next we will use ML to enable the Arduino board to recognise gestures. We’ll capture motion data from the Arduino Nano 33 BLE Sense board, import it into TensorFlow to train a model, and deploy the resulting classifier onto the board.
The idea for this tutorial was based on Charlie Gerard’s awesome Play Street Fighter with body movements using Arduino and Tensorflow.js. In Charlie’s example, the board is streaming all sensor data from the Arduino to another machine which performs the gesture classification in Tensorflow.js. We take this further and “TinyML-ifiy” it by performing gesture classification on the Arduino board itself. This is made easier in our case as the Arduino Nano 33 BLE Sense board we’re using has a more powerful Arm Cortex-M4 processor, and an on-board IMU.
We’ve adapted the tutorial below, so no additional hardware is needed – the sampling starts on detecting movement of the board. The original version of the tutorial adds a breadboard and a hardware button to press to trigger sampling. If you want to get into a little hardware, you can follow that version instead.
Setting up the Arduino IDE
Following the steps below sets up the Arduino IDE application used to both upload inference models to your board and download training data from it in the next section. There are a few more steps involved than using Arduino Create web editor because we will need to download and install the specific board and libraries in the Arduino IDE.
Download and install the Arduino IDE from: https://arduino.cc/downloads
Open the Arduino application you just installed
In the Arduino IDE menu select Tools > Board > Boards Manager…
Search for “Nano BLE” and press install on the board 
It will take several minutes to install
When it’s done close the Boards Manager window
Now go to the Library Manager Tools > Manage Libraries…
Search for and install the Arduino_TensorFlowLite library
Next search for and install the Arduino_LSM9DS1 library:
Finally, plug the micro USB cable into the board and your computer
Choose the board Tools > Board > Arduino Nano 33 BLE
Choose the port Tools > Port > COM5 (Arduino Nano 33 BLE) 
Note that the actual port name may be different on your computer
There are more detailed Getting Started and Troubleshooting guides on the Arduino site if you need help.
Streaming sensor data from the Arduino board
First, we need to capture some training data. You can capture sensor data logs from the Arduino board over the same USB cable you use to program the board with your laptop or PC.
Arduino boards run small applications (also called sketches) which are compiled from .ino format Arduino source code, and programmed onto the board using the Arduino IDE or Arduino Create. 
We’ll be using a pre-made sketch IMU_Capture.ino which does the following:
Monitor the board’s accelerometer and gyroscope 
Trigger a sample window on detecting significant linear acceleration of the board 
Sample for one second at 119Hz, outputting CSV format data over USB 
Loop back and monitor for the next gesture
The sensors we choose to read from the board, the sample rate, the trigger threshold, and whether we stream data output as CSV, JSON, binary or some other format are all customizable in the sketch running on the Arduino. There is also scope to perform signal preprocessing and filtering on the device before the data is output to the log – this we can cover in another blog. For now, you can just upload the sketch and get sampling.
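As a rough idea of what such a capture sketch boils down to, here is a simplified illustration (not the actual IMU_Capture.ino; the acceleration threshold and sample count are example values):

#include <Arduino_LSM9DS1.h>

// Illustrative capture loop (simplified; not the actual IMU_Capture.ino).
const float accelerationThreshold = 2.5;   // example trigger level, in g's
const int   numSamples = 119;              // the IMU runs at 119 Hz, so ~1 second of data

void setup() {
  Serial.begin(9600);
  while (!Serial);
  if (!IMU.begin()) {
    Serial.println("Failed to initialize IMU!");
    while (1);
  }
  Serial.println("aX,aY,aZ,gX,gY,gZ");     // CSV header line expected by the training notebook
}

void loop() {
  float aX, aY, aZ, gX, gY, gZ;

  // Wait for significant movement before opening a sample window.
  while (true) {
    if (IMU.accelerationAvailable()) {
      IMU.readAcceleration(aX, aY, aZ);
      if (fabs(aX) + fabs(aY) + fabs(aZ) >= accelerationThreshold) break;
    }
  }

  // Record one window of accelerometer and gyroscope readings as CSV rows.
  int samplesRead = 0;
  while (samplesRead < numSamples) {
    if (IMU.accelerationAvailable() && IMU.gyroscopeAvailable()) {
      IMU.readAcceleration(aX, aY, aZ);
      IMU.readGyroscope(gX, gY, gZ);
      samplesRead++;
      Serial.print(aX); Serial.print(',');
      Serial.print(aY); Serial.print(',');
      Serial.print(aZ); Serial.print(',');
      Serial.print(gX); Serial.print(',');
      Serial.print(gY); Serial.print(',');
      Serial.println(gZ);
    }
  }
  Serial.println();                        // blank line between gestures
}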
To program the board with this sketch in the Arduino IDE:
Download IMU_Capture.ino and open it in the Arduino IDE
Compile and upload it to the board with Sketch > Upload
Visualizing live sensor data log from the Arduino board
With that done, we can now visualize the data coming off the board. We’re not capturing data yet; this is just to give you a feel for how the sensor data capture is triggered and how long a sample window is. This will help when it comes to collecting training samples.
In the Arduino IDE, open the Serial Plotter Tools > Serial Plotter
If you get an error that the board is not available, reselect the port:
Tools > Port > portname (Arduino Nano 33 BLE) 
Pick up the board and practice your punch and flex gestures
You’ll see it only samples for a one-second window, then waits for the next gesture
You should see a live graph of the sensor data capture (see GIF below)
Arduino IDE Serial Plotter will show a live graph of CSV data output from your board.
When you’re done be sure to close the Serial Plotter window – this is important as the next step won’t work otherwise.
Capturing gesture training data 
To capture data as a CSV log to upload to TensorFlow, you can use Arduino IDE > Tools > Serial Monitor to view the data and export it to your desktop machine:
Reset the board by pressing the small white button on the top
Pick up the board in one hand (picking it up later will trigger sampling)
In the Arduino IDE, open the Serial Monitor Tools > Serial Monitor
If you get an error that the board is not available, reselect the port:
Tools > Port > portname (Arduino Nano 33 BLE) 
Make a punch gesture with the board in your hand (Be careful whilst doing this!)
Make the outward punch quickly enough to trigger the capture
Return to a neutral position slowly so as not to trigger the capture again 
Repeat the gesture capture step 10 or more times to gather more data
Copy and paste the data from the Serial Console to a new text file called punch.csv
Clear the console window output and repeat all the steps above, this time with a flex gesture in a file called flex.csv 
Make the inward flex fast enough to trigger capture, returning slowly each time
Note the first line of your two csv files should contain the fields aX,aY,aZ,gX,gY,gZ.
Linux tip: If you prefer you can redirect the sensor log output from the Arduino straight to a .csv file on the command line. With the Serial Plotter / Serial Monitor windows closed use:
 $ cat /dev/cu.usbmodem[nnnnn] > sensorlog.csv
Training in TensorFlow
We’re going to use Google Colab to train our machine learning model using the data we collected from the Arduino board in the previous section. Colab provides a Jupyter notebook that allows us to run our TensorFlow training in a web browser.
Arduino gesture recognition training colab.
The colab will step you through the following:
Set up Python environment
Upload the punch.csv and flex.csv data 
Parse and prepare the data
Build and train the model
Convert the trained model to TensorFlow Lite
Encode the model in an Arduino header file
The final step of the colab generates the model.h file to download and include in our Arduino IDE gesture classifier project in the next section:
Let’s open the notebook in Colab and run through the steps in the cells – arduino_tinyml_workshop.ipynb
Classifying IMU Data
Next we will use the model.h file we just trained and downloaded from Colab in the previous section in our Arduino IDE project:
Open IMU_Classifier.ino in the Arduino IDE.
Create a new tab in the IDE. When asked, name it model.h
Open the model.h tab and paste in the version you downloaded from Colab
Upload the sketch: Sketch > Upload
Open the Serial Monitor: Tools > Serial Monitor
Perform some gestures
The confidence of each gesture will be printed to the Serial Monitor (0 = low confidence, 1 =  high confidence)
Congratulations, you’ve just trained your first ML application for Arduino!
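If you are curious where those confidence numbers come from, they are conceptually just a read of the interpreter's output tensor after each Invoke(). Building on the interpreter skeleton sketched earlier in this post, that step might look roughly like this (the GESTURES array is a placeholder and must match the label order used in training):

// Illustrative only; assumes the interpreter was set up as in the earlier skeleton.
const char* GESTURES[] = { "punch", "flex" };                  // placeholder labels, in training order
const int NUM_GESTURES = sizeof(GESTURES) / sizeof(GESTURES[0]);

void printGestureConfidences(tflite::MicroInterpreter* interpreter) {
  TfLiteTensor* output = interpreter->output(0);
  for (int i = 0; i < NUM_GESTURES; i++) {
    Serial.print(GESTURES[i]);
    Serial.print(": ");
    Serial.println(output->data.f[i], 6);                      // confidence between 0 and 1
  }
  Serial.println();
}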
For added fun, the Emoji_Button.ino example shows how to create a USB keyboard that prints an emoji character in Linux and macOS. Try combining the Emoji_Button.ino example with the IMU_Classifier.ino sketch to create a gesture-controlled emoji keyboard.
Conclusion
It’s an exciting time with a lot to learn and explore in TinyML. We hope this blog has given you some idea of the potential and a starting point to start applying it in your own projects. Be sure to let us know what you build and share it with the Arduino community.
For a comprehensive background on TinyML and the example applications in this article, we recommend Pete Warden and Daniel Situnayake’s new O’Reilly book “TinyML: Machine Learning with TensorFlow on Arduino and Ultra-Low Power Microcontrollers.”
Get started with machine learning on Arduino was originally published on PlanetArduino
beacon-of-chaos · 8 years
Text
Defenders of Aura - A Battle Century G Campaign Diary
Sorry for the delay, laser eye surgery makes staring at large blocks of text difficult.

Session 7

Sadly Juyon's player was not able to make this session. Juyon was assumed to be working in the background during the game.

We begin this session where we left off: Floating in the middle of space having been saved from death by an alien vessel. The Naul (a race of horned humanoids) tell us that they were tracking the ship that attacked us and when they saw we were in trouble they came to help. Our ship is being held together by a tractor beam and not much else, so after the initial panic has worn off, the ship's crew and the Naul begin working together to fix things. Fiona and Ax lend a hand fixing the holes in the hull (Fiona's mech in particular being designed for this kind of work), Spectre and Eric head to the medical bay, and Sinclair helps getting the ship's computer up and running.

The android is given two options for the ship's computer; either try and repair it as best as he can, or try and bypass it using his own brain to run the ship until full repairs can be done. Sinclair's response?

Sinclair: *gasp* I've always wanted to be a ship's computer!

However, after the GM reports the difficulty level of the two options, Sinclair decides to go for the longer, but less risky, method of repairing the computer. Perhaps next time.

After completing their respective tasks the group meets up and finds one of the nauls in one of the corridors, looking confused as she places her hand on one of the walls. She says she doesn't understand why she cannot detect the "Life Energy" of our ship. It seems Naul ships are organic in nature, grown rather than built. She turns to Sinclair in fascination at the concept of a lifeform that was built rather than born. She asks many questions and expresses an interest in learning more about our human ways. We ask her questions too, especially about what appears to be the magic that the Naul are using.

Right on cue, a wormhole appears next to us and a naul comes out to invite us onto the naul ship to speak with their captain. When we enter the wormhole we exit in what appears to be a large forest. We look up and find ourselves staring up at what appears to be open space.

GM: Everyone roll 1d10 plus willpower to avoid freaking out.

Fiona: *rolls a 10* It's a window. I've seen them before.

Sinclair: *rolls a 1* Oh my god sensors don't detect any glass what is happening ahhhh!

It's actually a projection of the outside. So the ship is space Hogwarts being run by space elves. Neato!

The captain is an older-looking naul with a cane. He greets us and tells us that he is from Camelot, the planet with the human-alien alliance. The ship that attacked us was a member of the Ebon Order, a group of humans and nauls originally tasked with hunting the aliens that the cultists we fought in session 1 are supposedly working for. The Ebon Order went rogue not long after they entered dark space and since then have been attacking other ships indiscriminately for no apparent reason. The captain wants to enlist our help in capturing this rogue order.

We ask about the Chinese ship we were supposed to be meeting. The captain tells us that they passed a human ship on the other side of Miranda that seemed abandoned. We decide we'll check it out after dealing with the Order. Our team and the captains of each ship get together to discuss plans to take the Ebon Order ship down.
After checking the information from our recent battle, Eric notices something unusual about the enemy shields; when they launched their mechs to counter ours, the shield frequency changed to allow low-speed objects to pass through briefly. This seems like our best shot at stopping them, so we plan to lure them in by pretending the Naul ship is broken down, then the Cruel Odysseus will come up behind them and catapult us towards the enemy ship as soon as the enemy mechs launch. Then we'll disable the enemy shields and engines in order to allow the Naul to board. It's a crazy, risky plan, but that's what we're here for!

For additional power, Fiona builds an anti-spaceship sword for her mech out of scrap metal from the Odysseus and also finds a nuclear missile onboard. Why does this junky old ship have one of those? I don't know.

So the bait is set and we wait. We don't wait for long, as the Ebon Order ship arrives quickly and immediately begins firing on the Naul ship. The Odysseus moves from its hiding place in the gas giant's cloud and we are launched in time to catch the shield at its weakest. We pass through, aiming to land at the rear of the ship between the engines, where we can do the most damage before we are noticed. The GM makes us roll for the landing. Sinclair once again makes a perfect landing (3 for 3!) while the others get scattered across the ship's aft.

We go to town on whatever we can find that looks important; Sinclair hacks the engines to disable them while the others bring their weapons to bear against shield generators, weapons, and suspicious-looking chunks of metal. Soon enough we are set upon by enemy mechs. Fiona gets two artillery units appearing close to her, Ax and Sinclair get two mid-range mechs, and Spectre ends up dueling the commander mech from the last fight, whose only communication is "DIE SCUM!".

Fiona dodges between weapons arrays and the artillery hesitate, not wanting to blast apart their own ship. Sinclair shields Ax from attack while the rock star uses his bazooka and heavy-duty anti-mech cannon with impunity, blasting one mech clear into space and disabling the other. Fiona and Spectre perform a double-team maneuver against the commander, with Spectre pushing her towards Fiona who then leaps off a wall and down with her ridiculously large anti-spaceship sword, which smashes into the commander and the ship, causing all three mechs to crash through a section of the hull and into some kind of small hangar.

Sinclair considers following Fiona and Spectre, but the hangar is already cramped and Ax's low speed makes him a sitting duck against the artillery mechs, who are now free to attack since most of the ship's systems around us have been destroyed. Sinclair repairs Ax's mech up to almost full health, but sadly a full salvo from each of the enemy mechs is enough to smash his mech to pieces. It survives at 1 hit point, due to GM fiat, just long enough for a retaliatory blast from Ax to put them at critical HP. Luckily, now that the shields are down, we get some support fire from the Odysseus, putting the remaining mechs out of commission.

Ax's mech gives up the ghost (RIP Riggnarok) and Ax is badly injured. Sinclair ejects from his own mech to perform some emergency first aid on the rock star, using duct tape and superglue to hold his wounds together. Ax sadly retrieves a broken guitar neck from his cockpit as a reminder. Meanwhile, the three mechs in the hangar are striking each other at point blank range, each taking heavy damage.
The commander runs a sword straight through Spectre and Eric's mech, forcing both of them to eject (RIP Eagle). Fiona strikes back with her jackhammer punch attack, but the commander is able to get one last attack in before her mech is destroyed and Fiona's mech is downed at the same time (RIP Arc Gear).

Sinclair: Honestly, I take my eyes off you for one minute...

By the time Fiona comes to, Spectre is gone. Eric informs her that the woman that the two of them fought has dragged him off. Sinclair and Ax meet up with Fiona and Eric and go to find the kidnapped doctor.

Spectre comes round to find the mech commander standing over him. She's clearly injured with a bleeding side wound, but she's pointing a gun at him. She begins barking questions at him, strange questions like "Are you with them!?" and "Do you hear the voices?". Spectre informs her that he doesn't know what she's talking about, he's only here because he's looking for his father, Victor. The woman hesitates for a moment. "Yes, yes. You look like him. I'm taking you to see the captain." Spectre is then dragged off by his collar. He tries to fight back, but he's a scrawny scientist and she's a trained soldier so he doesn't get anything other than a bash on the head for his trouble.

Meanwhile, the rest of Delta Team is on their way to the bridge to find Spectre and take over the ship. We get waylaid by soldiers so we need to take cover and try to fight back. Naturally the only team member with any non-mecha combat skills, Juyon, is not here. We take a few potshots from behind cover (Sinclair using a non-lethal stun gun, Ax using a flamethrower) but we're cornered. Thankfully we get help from some Naul soldiers who have been able to warp on board now that the shields are partially down.

On the bridge, Spectre is brought before the captain, an older gentleman with a severe expression. Spectre asks about his father and the captain tells him that Victor worked with their order for some time, but eventually he left for parts unknown. There's more talk about them and the voices with no real explanation. The captain offers Spectre a place on his ship instead of Victor, asking him to finish his father's work.

Before Spectre can answer, Delta Team and the Naul arrive. There are many naul guards around so we duck behind some consoles. The captain grabs Spectre to use as a human shield, effectively rescinding his offer of employment. Fiona shoots the lights out, plunging the bridge into darkness. This probably would have been a good idea if we had thermal vision and the enemy didn't instead of, y'know, the other way around. Well, Sinclair has infrared vision so he drops a smokebomb to double blind everyone. So the entire bridge is a mess of soldiers staggering around trying to determine if who they just punched is friend or foe. Fun times.

Fiona decides to try a bluff. She yells out to the enemy forces:

Fiona: If you don't stand down we will fire a nuke at this ship and there will be no survivors!

Cultist: No survivors is what we live for!

[beat]

Sinclair: Wait... what?

Well, it was worth a shot. The injured mech commander tries to attack Fiona (who has decided that they are now rivals) but with her wound slowing her down, Fiona is able to simply pin and handcuff her. Sinclair administers some first aid to the prisoner. The captain seems to be trying to make a break for it, with Spectre in tow, when a wormhole opens and out steps the captain of the naul ship. The two captains seem to have some history as they exchange some quick banter.
And then both of them draw beam swords and start having an actual light sabre duel on the bridge! While we're getting all the Star Wars quotes out of our systems, the captain's elite guard, the Reaper, appears, lunging for the injured Ax. Ax does the only thing he can in this situation: he stabs the reaper with the neck of his broken guitar.

Reaper: You... killed me... with a guitar.

Ax: And there's nothing more Metal than that!

After this, the naul captain wins his duel and the remaining guards surrender. We won! We did lose 60% of our mechs though, so it's a fair cost. The naul captain thanks us for our help and invites us back to Camelot to receive rewards. We accept but decide to check out the abandoned human ship they mentioned first.

Upon arriving, however, we find that this is not the Chinese ship we were supposed to be meeting, and is from the old mining colony we passed last session. It's been abandoned for a long time, and already looted, so it's of no use to anyone. We're confused as to what happened to the Chinese ship, but we decide we'll deal with it later.

We head to Camelot where we meet with the human president, who thanks us for a job well done. As thanks, we'll be getting access to some of their advanced technology to take back to Aura (and use to rebuild our mechs). He also agrees to help Spectre in finding any information on his father.

We head back to Aura to report the missing ship. When we meet with Nina she tells us... that the Chinese ship has already arrived. What's more, they say they already met us and that we told them we were going to investigate a distress signal. And that research station we checked in with? Doesn't exist. What the-? Sinclair requests that the team be placed under quarantine until this matter can be sorted out and the team agrees.

And there ends session 7! There's a large amount that I missed out because this was a busy session and this log is long enough, but it was easily my favourite of the campaign. I'm sure the action and plot twists don't come across as exciting and interesting with my writing style but I hope you enjoyed reading. Sessions 8&9 will hopefully be posted next week.

Bonus quotes: http://www.giantitp.com/forums/shows...postcount=1239
tech-battery · 4 years
Text
Galaxy Book Flex review: A pretty QLED laptop with a useful S Pen
Samsung’s laptops are coming into their own.
Samsung has had a troubled past as a laptop maker. Its ATIV, Notebook and Odyssey machines were underwhelming, and in the late 2010s the company consolidated its efforts into the Galaxy Book line. Instead of competing against the usual PC makers like Dell, HP, Lenovo and ASUS, Samsung focused on making more mobile-friendly machines. It prioritized thin-and-light designs and great displays, and it added an onboard stylus, borrowing the features that made its smartphones the industry favorites they are today.
With the Galaxy Book Flex, Samsung continues to show improved self-awareness by showcasing its superior display tech. The Flex is the first laptop with a QLED panel, which promises more colors, deeper blacks and greater brightness. It also comes standard with an S Pen and a wireless-charging pad built into the trackpad. Two models -- 13 inch and 15 inch -- are available, and we received the smaller version, which costs $1,349. The larger is just $50 more, and both are available today.
Engadget Score
Pros
Beautiful build
Useful onboard S Pen
Trackpad can wirelessly charge other devices
Comfortable keyboard
Cons
Limited configurations
Middling battery life
Summary
The Galaxy Book Flex is a gorgeous, powerful laptop with a vibrant QLED display. Though its battery life isn’t as long as some of the competition, the Flex is still a capable machine with a helpful S Pen onboard.
Design
I gotta give Samsung props. The company has seriously stepped up the design of its laptops over the past few years. The Galaxy Book S and Galaxy Chromebook both had super sleek builds, and the Flex is no different. This thing is all clean lines and sharp corners, with gleaming edges that give it a polished look. The deep royal blue of my review unit is refreshing: I’m used to boring silver, black or gray laptops, so my eyes welcome this change.
Though it’s impressively thin, the Flex feels solid and dense. It’s still pretty light though at just 1.15kg (2.53 pounds). In comparison, the new MacBook Air and the Dell XPS 13 2-in-1 are both heavier at 2.8 and 2.9 pounds, respectively.
The Flex’s 13.3-inch screen is surrounded by minimal bezels on the top, left and right sides, with a fatter chin at the bottom. That thicker bottom bezel is still common in laptops, though Dell managed to do away with it in this year’s XPS 13 -- maybe the rest of the industry will catch up soon. Samsung still managed to squeeze a webcam above the Flex’s screen, although it sadly doesn’t support Windows Hello logins.
Along the edges, you’ll find a headphone jack, a microSD card reader, a push-to-release S Pen slot and three USB-C ports (two of which are Thunderbolt 3 compatible). That’s one more USB-C socket than the XPS and MacBook have, with the Apple laptop lacking the card slot the other two offer. As a consequence of offering more connectivity options, the Flex is also slightly thicker than both of those laptops. Still, it’s compact enough to fit in most of my work bags.
Display
I’ll be honest: I can’t tell the difference between an OLED and QLED screen of the same size and resolution. And you might not be able to either. That is to say, you won’t have any complaints about the Flex’s full HD display in terms of color reproduction and vibrance. I watched several episodes of Amazon’s new show Upload. The rich, autumnal colors in the leafy forest settings were stunning, and it was easy to make out details in even dimly lit scenes.
My apartment gets a lot of light, so it can sometimes get hard to see any laptop’s screen. The Flex’s Outdoor Mode is supposed to combat that by bumping the brightness an extra 200 nits to 600 nits. But switching it on via the keyboard shortcut only helped a little bit. However, it was much more useful on a slightly gloomy day.
I wish Samsung hadn’t made this a mode that you switch on or off and instead offered it as a few extra levels on the built-in brightness scale. Sometimes I needed a little bit more than the built-in max, but in those situations I had to use Outdoor Mode, which scorched my retinas. Still, it probably conserves battery to only bump up brightness by that much for short periods of time rather than encourage you to use it for an extended stretch.
Outdoor Mode also added a weird yellow cast to the screen, as if it turned on a blue-light filter. This wasn’t a big deal other than when I needed to edit photos for color temperatures. In those cases, you’ll have to turn off Outdoor Mode and figure out some other way to see your display without that brightness boost.
Keyboard and trackpad
Because it’s so slim, I was expecting the Book Flex to have shallow keys, but Samsung was able to offer a surprisingly deep amount of travel here. The keyboard is well laid out, with no undersized buttons save for the right-shift key, which is a little less wide to make room for the fingerprint sensor next to it.
My one gripe would be that the left-shift key seemed a little sticky, and too often I would try to uppercase the first letter of a word and end up getting the first two letters instead. I’d blame it on my lazy pinkie, but I haven’t encountered this on any other laptop.
A quick note on the fingerprint scanner: It was fast and accurate, usually unlocking the laptop without delay. Since it’s Windows Hello-compatible, I also used it for authentication on programs like Google Chrome. While the placement is uncommon -- most laptops place their fingerprint readers on the top right of the deck or embed it into the power button -- Samsung’s choice didn’t feel too odd.
Below the spacebar sits the Flex’s trackpad, which is fairly roomy given the laptop’s small footprint. Smaller notebooks often have cramped trackpads, but the surface area here is generous. It’s not as tall as the XPS 13’s or MacBook Air’s, but it offers enough vertical space. Aside from being large, the Flex’s touchpad is also responsive and smooth, and gestures like scrolling or pinch-to-zoom worked well.
S Pen
One of my favorite things about Samsung laptops is the S Pen. It’s not only fun for drawing self-portraits or scribbling down notes but also helpful for signing urgent PDFs. I just had to open the document, select “Add a Note” from the toolbar and sign on the dotted line.
The Flex offers the same onboard stylus as the Note 10 -- don’t expect the bigger, more pen-like version on the Galaxy Tab S series. Still, it’s comfortable enough to use for hours while drawing a self-portrait. Fans of Samsung’s Air Command menu on its Note phones will be pleased to know it pops up here too when you slide the S Pen out of its slot.
Performance and in use
The Flex may look like a dainty machine, but it actually packs a powerful 10th-generation Intel Core i7 processor. My review unit came with 16GB of RAM, which is double the 8GB configuration you can buy in the US. (The 15-inch model has the same CPU but 12GB of RAM.) Bear in mind that this means my experience, at least when it comes to performance and speed, isn’t going to be representative of what you can expect.
With these guts, the Flex deftly dealt with my daily workflow of Slack, dozens of Chrome tabs, spreadsheets and the inescapable Zoom calls that permeate our lives now. The laptop also kept up with my new at-home needs, like executing my podcast-recording setup and uploading large files for review videos. I also played several rounds of League of Legends after rediscovering it, and the Flex never let me down.
That is, until I decided to turn on Samsung’s Silent mode. This is meant to keep the laptop’s fans quiet if you’re bothered by them. It’s pretty easy to activate: Just slide a switch in the Samsung Settings app. You don’t have to go into Boot mode to access it. I never found the Flex too loud, but I guess those trying to get work done in bed next to a light sleeper or just really hate white noise might.
I did notice a significant decrease in sound when I enabled Silent Mode, though it seemed to throttle CPU performance. When I tried to continue playing League afterward, the system lag made it nearly impossible: My character could barely make it to the first turret before I gave up and restarted (with Silent Mode off). This is somewhat understandable. You can’t expect high-speed performance when the fans are turned off without the computer running too hot. Silent Mode is more useful if you’re bothered by the noise and aren’t running anything intensive.
Battery life and wireless PowerShare
Thanks to the Flex’s QLED screen and 69.7Whr battery, Samsung promises up to 20 hours of runtime. In reality though, that number is a lot lower. Our video-looping battery test drained the Flex in about 13 and a half hours, which is better than the MacBook Air but falls short of the XPS 13 2-in-1’s 14-and-a-half hour mark. In real-world experience though, I saw power levels plunge below 20 percent after five hours of heavy use. To be fair, that involved energy-sapping processes like video playback and more-frenzied League games.
One of the new features of the Flex is its trackpad’s built-in wireless charger that can deliver power to Qi-compatible devices. It’s the same Wireless PowerShare feature that debuted on the Galaxy S10. You can’t use the trackpad while you’re charging something, which makes sense, since something is literally obstructing the usable area. I’m not sure how helpful this feature is, since wireless charging is typically too slow to be meaningful. I placed the Galaxy S20 Ultra on the trackpad (after the requisite first step of turning on the feature in Settings) and watched as its battery level climbed painfully slowly from 79 percent to 85 percent in 20 minutes.
For something smaller, like the Galaxy Buds, I could see this being useful in a pinch (say, in an airplane when you’re running low on juice). Otherwise, wireless PowerShare isn’t something I’ll use a lot.
Wrap-up
I’m enamored with the Galaxy Book Flex. It’s a pretty machine with a nice display, powerful guts and useful S Pen. But I wish there were more configurations (for each size) to make the base price lower. Sure, $1,349 for a Core i7 processor and 8GB of RAM isn’t the priciest, considering the new MacBook Air tops out at $1,199 for a Core i5 and the similarly specced XPS 13 2-in-1 costs a lot more at $1,700. But with the competition offering options starting at $999, that’s tough to swallow.
Still, if you’re looking for a gorgeous, beefy laptop with a good screen and don’t need it to last forever, the Galaxy Book Flex is worth considering. More important, it proves it’s time to take Samsung’s laptops seriously again.
mealha · 5 years
Text
iPad Pro Review: Closer than ever to replacing your laptop
Apps like Adobe's Lightroom still run smoothly on the iPad Pro. (Stan Horaczek)
I clicked into this window on the new iPad Pro using Apple’s Magic Trackpad. It seems like a small thing–I do it just about every day on a MacBook Pro. But, this isn’t a computer. Or it is a computer. Regardless of how Apple does or doesn’t market their tablet, I’ve been using the new iPad Pro as my main work machine for about a week now. And it’s closer to replacing a laptop than it ever has been.
From a hardware standpoint, the new iPad Pro isn’t wildly different from the previous version that debuted back in 2018. The processor has moved up from the A12X Bionic to the A12Z Bionic. As those cryptic but incremental names suggest, this isn’t a revolution in terms of silicon. In reality, though, it didn’t need much of a speed bump anyway. On the new iPad Pro, apps load quickly, intensive programs like Lightroom feel snappy, and it generally seems like it can handle whatever task you ask of it.
Mouse support
When it comes to usability, however, it’s the software that really changes the way you interact with the iPad Pro. The operating system—called iPadOS 13.4—now integrates full mouse support in the form of a shape-shifting cursor. The Magic Keyboard with an integrated touchpad is coming down the road, but for this review, I synced Apple’s Magic Touchpad meant for typical computers. Tap on the touchpad and a small circle appears to show you where you’re pointing. Get close enough to an icon or a button, and the cursor snaps over the object and takes on its shape.
It’s simple and natural when it works, which it does most of the time. There are still several places–including Safari–in which the new system is not totally integrated, but as it gets wider adoption, I can see myself getting extremely used to it.
The iPad Pro and Pencil 2 are still a very excellent pair. (Stan Horaczek)
If you’re dealing with a lot of text, this really does make things a lot better. Some people have developed surgical accuracy when it comes to tapping in exactly the right spot to place a cursor in a document. I’m not one of them and, thankfully, the touchpad support has made selecting and manipulating text simpler.
The touchpad also helps with functions such as sliders and cropping in Lightroom. I’ll often release a slider or the corner of a cropping tool only to have it move slightly during separation. I didn’t get that with the touchpad, which translated into fewer instances of wanting to throw it across the room.
For now, the cursor is relegated to external input devices. You can hook up a mouse or a trackpad to take advantage of it. But, we won’t get the full effect until next month when Apple rolls out the swanky, $350 (for the 12.9-inch model) Magic Keyboard. It has an integrated touchpad in exactly the spot where I’ve accidentally tapped my desk expecting a touchpad when using the regular Smart Keyboard Folio.
In addition to its shape-shifting tricks, the new mouse functionality also adds gesture control closer to what you’d find on a Mac laptop. Swipe up with three fingers to go home and swipe right with three fingers to switch between apps. Of course, you also get the two-finger natural scrolling, which Apple does far better than anyone at the moment.
LIDAR
There is, however, one major hardware update worth talking about: the new LIDAR sensor that lives inside the iPad Pro’s upgraded camera module. LIDAR stands for Light Detection and Ranging, and it typically helps self-driving cars or rovers meant for other planets make 3D maps of their surroundings by sending out light and measuring how it bounces back. The iPad is doing something similar by creating intricate maps of the objects around it within a five-meter distance. It can then use that data to inform augmented-reality apps through ARKit 3.
Right now, there aren’t a ton of ways to really experience how much better AR is with the LIDAR sensor onboard the device. The native Measure app, however, takes full advantage of it. For the unfamiliar, Measure uses augmented reality to give you real world dimensions of physical objects, like a virtual ruler. It works...OK. It’s not something I’d trust for extremely accurate tasks, and that’s still true with the LIDAR-equipped version. It has, however, gotten a lot better and faster. The measurements seem more reliable, and pop up more quickly.
Apple says that AR app developers should notice an improvement in the performance of their apps without having to totally recode their software. But, there will be opportunities to specifically take advantage of some LIDAR functions.
The Magic Trackpad will eventually give way to the upcoming Magic Keyboard, which has a built-in trackpad and arrives in May. (Stan Horaczek)
The rest of the features
Again, not a ton has changed here from a hardware standpoint, so if you’re familiar with the 2018 iPad Pro, picking this up will feel very similar.
Apple has revamped the microphones, which is a great addition for a Pro device. We also saw the company put a lot of emphasis on built-in mics for the latest MacBook Pro as well.
The battery truly does last all day, at least in the larger 12.9-inch model I’ve been trying out. I’ve put roughly 9 hours in on this device today and it still has 10 percent battery life despite some work in Lightroom.
Apple also says it has made some hardware changes to the thermal performance to keep the components cooler inside. It wasn’t specific about what exactly those changes are, but it stays cool to the touch even under heavy load.
Lastly, the screen is still gorgeous. It has a smooth-scrolling 120 Hz refresh rate. The 240 Hz touch-sensing is extremely responsive. The maximum brightness is literally too bright for indoors and totally usable even outside.
Who should buy it?
This is where we run into the overly complex meaning of the phrase “pro” when it comes to gadgets. To be clear, the 11-inch iPad Pro starts at $799 and the 12.9-inch model begins at $999. If you want the Magic Keyboard down the road, it will cost you an extra $300 or $350 depending on which size you get. And you’re going to want a Pencil 2, which tacks another $100 onto the bill. And while the base storage options have moved up from a paltry 64 GB to 128 GB, going beyond that is going to cost you. Storage of 256 GB costs an extra $100 and 512 GB will set you back an extra $300 over the base model. Going all the way up to 1 TB pushes the 12.9-inch model up to $1,499. That’s a lot of numbers to digest, but it’s simple to distill: It’s as expensive as a good laptop.
For pro creatives, it’s very useful as a secondary tool. As a photographer, I love it for editing individual photos, doing Lightroom sorts without having to worry about my laptop burning through its battery and, most importantly, showing people photos on the awesome screen. But, as a main workflow device, it’s not quite there yet because exporting and uploading images to various fulfillment sites is still tedious and unreliable.
For videography, it’s great for even some complex projects, but there’s still no Final Cut to help fully make the jump.
If you have a 2018 or later iPad Pro, it’s not necessary to run out and upgrade right now unless you plan to get heavy into augmented reality. If you’re just answering emails, typing into docs, and watching Netflix, the Pro is a more-than-capable replacement for a laptop, but then again, so are the other, cheaper iPad models.
Ultimately, I was glad to get back to my laptop for my regular workdays, but the iPad feels more viable than ever as a complete replacement. I expect that will be even more applicable once the Magic Keyboard lands in May.
from Popular Photography | RSS https://ift.tt/2QMuRkP
0 notes
colourmyliving · 6 years
Text
LG 60-inch 60SJ810 Super UHD TV: the rare 60″ 4K panel with Dolby Vision
I was going for 65-inch but the missus said it is too big and wanted me to go with a 55-inch instead. 60-inch was a compromise. There aren’t many panels at this size and the 60-inch LG is a mid- to high-end panel with plenty of appealing features and a premium look.
Features at a glance
LG Super UHD TV is the pinnacle of LG LED TV
Nano Cell™ technology, for precise colour
Billion Rich Colours – More of the colours you love
Genuine LED innovation and Nano particle precision
The optimal viewing experience from any angle
Multi HDR with Dolby Vision™ unlike other HDRs
Top 8 Reasons Why I Chose the LG 60-inch Super UHD TV
Apart from being the right size (not so big that it dominates the room, and not so small that it won’t pass for a big-panel display), here are eight reasons why I went for the LG SJ810 series. Note that the series also comes in 49-inch, 55-inch and 65-inch sizes. These are 2017 models, so if you can still find them, the price will have come down considerably.
1. Amazing Colour:
We have always been very impressed with LG’s panels. When browsing for a suitable LED panel to buy, we looked at many different screens and kept coming back to LG for its sheer brilliance and colour, thanks to Nano Cell technology. The Nano Cell display offers more precise colour and a wider colour gamut, with advanced 10-bit processing to expand the colour spectrum and project billions of colours, shades and hues.
The TV has a billion-colour panel offering a colour range 64 times broader than a conventional TV’s, for a more lifelike picture.
2. Deep Black
Blacks are not as dark as on OLED panels, but they are deep enough. We couldn’t fault the OLED displays we compared, with their effectively infinite contrast; there is no way an LED panel can match the contrast of OLED’s self-lighting pixels. But you are going to pay more for OLED. At the time of writing, OLED panels of equivalent size are still pricey, some more than double the price of LED.
So no, the blacks are never going to be as deep as on an OLED, but this is a very well-executed LED panel with sufficiently deep blacks.
3. Outstanding Picture Quality
I can’t stress this point enough. The biggest reason I fell for the LG SJ810 is its outstanding picture quality. Compared to other LED panels, LG with its Nano Cell technology just stands out. The technology delivers amazing picture quality with uniform colour volume thanks to its uniformly sized nanoparticles, roughly one nanometre across, hence the name Nano Cell.
4. Wider Viewing Angle
One of the tests we do when reviewing big-screen TVs is checking viewing angles. We sweep from side to side, changing the viewing angle to see how the panel performs. When you have several televisions lined up horizontally, it’s even easier to pick out which performs best: simply stand at 45 degrees to the right or left of the panels and look at the displays from there. The panel that stays brightest when viewed from an angle will be your best choice.
The LG SJ810 covers this with accurate colour and contrast at any angle, achieving 100 percent of its colour gamut up to 60 degrees off-axis, thanks to IPS and Nano Cell technology. Cheaper models do not have Nano Cell technology, so you have been warned.
On conventional panels, colours also tend to distort when viewed off-centre. Again, LG’s Nano Cell display enhances off-centre colour consistency and reduces the colour distortion caused by viewing angle, maintaining the full colour gamut at angles up to 60 degrees off-centre. So it doesn’t matter whether you sit to the right or left of the couch (60 percent of people reportedly watch TV off-axis); you are assured of rich, deep and wide colour with plenty of contrast.
5. Active HDR with Dolby Vision
We knew we were going to use this TV to stream Netflix and Amazon Prime content in 4K or Full HD, and wanted more than just broadcast-level HDR. The LG SJ810 supports Active HDR with scene-by-scene optimisation and automatic switching between several high dynamic range formats: HLG, HDR10 and Dolby Vision. Going with Samsung or Panasonic panels would have left us with only HDR10 and HDR10 Plus apart from HLG. We like knowing that the extra format, Dolby Vision, is there on the LG SJ810 when we stream Netflix content, and it doesn’t let us down.
Netflix supports Dolby Vision
Amazon Prime Video supports HDR10 and HDR10 Plus
Broadcast TV supports HLG
The advantage of Dolby Vision is that every scene contains dynamic metadata for optimised picture quality within every frame; HDR10 and HLG do not support scene-by-scene dynamic switching. You can find out more about the different high dynamic range formats in our article How to Choose the Right 4K Ultra HD Large Screen TV.
6. Excellent Quality Audio
Sound is rarely an area where big-screen televisions excel. In making the television slim and the surrounding bezel as small as possible, the sound system is usually compromised. That is not the case with the LG SJ810. Even without a soundbar or external sound system, it performs exceptionally well without cracking at high volume. For this panel, LG used high-quality audio tuned by Harman Kardon. The result is an immersive sound experience, enough to carry a theatrical feature without a soundbar or separate active woofer.
If you don’t have the money to buy both the TV and a sound system in one go, the Harman Kardon audio system is a blessing, delivering a balanced sound performance with good mid-range and crystal-clear dialogue.
For even better quality sound, we would still recommend a multipoint theatre sound system or Dolby Atmos capable sound bar. This is after all a super thin TV with limited room and space to push the low range.
7. WebOS and Magic Remote
We are still getting used to the system, as there are many features we have yet to take full advantage of. The smart software supports Netflix, Prime Video, Now TV, iPlayer, the Freeview hub service (also known as Freeview Play), YouTube, web browsing, a TV guide, USB content (music or video) and more. YouTube browsing and viewing is great. The interface doesn’t take up the entire screen, only a sliding launcher bar at the bottom of the display, which lets you quickly access frequently used apps without much effort. The current version is WebOS 3.5.
WebOS supports direct 4K streaming from Netflix, Amazon and YouTube so you can take advantage of the Ultra HD panel’s capability immediately given the right 4K HDR content.
The Magic Remote combines normal buttons, voice and on-screen pointer navigation. With the buttons, we mainly use the directional pad and scroll wheel to navigate and click on the option we want. Voice search using the remote is a bonus that saves you from typing. The system is smart enough to search across all streaming sources, and offers a keyword web search as a last resort.
iOS and Android apps are available should you prefer to type. The smartphone and tablet apps also let you share your media to the TV. We tried streaming 4K and Full HD video from an iPhone via the app straight to the big screen, and it worked really well. The family can relive the moments with videos captured during our holiday, no computer required.
The voice remote with motion control and cursor makes browsing the smart OS easy. Some users have complained that the remote is confusing and takes time to understand, but we don’t think so. Granted, the disconnect between the motion sensor for pointing and the clicking does take a little getting used to. The dedicated Netflix and Amazon buttons are handy for us as streamers.
8. Cheaper
We mentioned this point before and will make it again: LEDs are still ultimately cheaper. If you are on a budget but still want a panel that can hold a candle to an OLED, only Samsung’s QLEDs or mid- to high-end LEDs will do, and the LG SJ810 is not one to let us down. Considering the price of the LG compared to OLED TVs, this really is the sensible choice.
Caveat: we don’t watch live TV anymore, so we cannot comment on the tuner or broadcast TV quality.
What Could Be Better
We noticed two things with the panel. First, when the system is loading WebOS (the smart TV software) and the pointer moves over the screen, you can’t help but notice a light glow around the cursor and sometimes a long vertical streak. LG has since mitigated this with updates that use a light grey background on the loading screens for Netflix and Amazon Prime. The underlying issue is colour bleed, which is especially prevalent when the whole screen or scene is black; local dimming struggles with it, leaving some areas insufficiently dimmed.
Second, we picked up judder in fast-motion scenes, though this could be a lack of frames or video data from the source. We don’t always see it, so it doesn’t bother us much, but when you do, especially during sudden panning or movement, you can sense a skipped frame or two. Gamers should still be happy with the result: even with a high-motion game, the response time in game mode is close to 15 ms.
We have a fairly bright living room, with natural light from the windows during the day and soft lighting in the evening. The panel performs well, and with the right tweaks these flaws are not noticeable.
One last thing: we picked up a slight lag between the optical audio output and the onboard Harman Kardon speakers. It is not enough to make the video and audio seem badly out of sync; we simply mute the TV’s onboard speakers and use the external multi-channel surround sound system instead. A dedicated AV receiver with Dolby Atmos surround that supports high-resolution audio and 4K passthrough will alleviate this issue.
Specifications
Display: LED panel with IPS, 60-inch, 3840 x 2160 resolution
Audio: 20 W output, 2.0-channel Harman Kardon speaker system, ULTRA Surround, Magic Sound Tuning, Hi-Fi Audio, LG Sound Sync, DTS decoder; codecs: AC3 (Dolby Digital), EAC3, HE-AAC, AAC, MP2, MP3, PCM, DTS, DTS-HD, DTS Express, WMA, apt-X
Smart TV: WebOS 3.5, Magic Remote included, natural voice recognition, universal control, web browser, music player, Freeview Play
Connectivity: 4 x HDMI with ARC, 3 x USB (1 x USB 3.0), 1 x LAN, 1 x CI slot, 2 x RF in (RF, satellite), WiFi, digital audio out (optical), headphone/line out, WiDi (PC to TV), Miracast (mobile-to-TV mirroring), Simplink (HDMI CEC), network file browser
Broadcast: digital TV reception (terrestrial, cable, satellite), 8-day EPG, digital recording
Power: energy saving mode, energy efficiency class A+, 0.5 W standby power consumption
Dimensions and weight: 1344 x 774 x 64 mm without stand, 1344 x 835 x 313 mm with stand, VESA 300 x 300, 20.8 kg (TV only), 22.3 kg (TV with stand)
0 notes
toptecharena · 6 years
Text
The Acer Swift 7 was introduced at CES 2018 as the reigning “world’s thinnest laptop,” and this year the firm managed to beef up the laptop’s display to 14 inches while maintaining the title. However, it’s clear that the Swift 7 has lost too much to the competition in holding onto the moniker.
For starters, the Swift 7 employs a unique touchpad solution in that it no longer clicks – neither physically nor haptically, like the 12-inch MacBook. This was done to achieve the thinness required to maintain the title.
From there, the laptop uses an aging, fanless Intel Core Y-series processor that sees the laptop fall well behind similarly-priced rivals in performance. What you ultimately get is an absolutely gorgeous Ultrabook with built-in LTE that’s difficult to recommend amongst a sea of far more performant laptops that are nearly just as thin and light.
Price and availability
Acer sells just one configuration of its new Swift 7 laptop, which calls for $1,699 (about £1,281, AU$2,275) in both all-black and black-on-gold color schemes. That price gets you the single spec sheet on offer, which includes a fingerprint sensor for biometric Windows Hello login as well as an LTE modem and eSIM built in.
Comparatively, the HP Spectre 13 measures six tenths of an inch thicker than the Acer model, and features one of the latest Intel Core i7 U-series processors, with directly comparable storage and memory amounts, for just $1,399 (about £1,055, AU$1,873) list price. Granted, the laptop’s screen is nearly an entire inch smaller, but the device can be configured with double the memory and storage on offer within the Swift 7 for just another 10 bucks or quid.
Likewise, the Huawei MateBook X Pro is a 14-inch laptop that calls for just $1,499 (about £1,130, AU$2,007) to completely destroy the Acer Swift 7 from a value perspective. This laptop isn’t as thin or light, but it’s not that far off for offering twice as much memory and storage as well as stronger Nvidia GeForce MX150 graphics and a more powerful and more recent Intel processor – oh, and not to mention a far sharper display at 3,000 x 2,000 pixels.
The latest 12-inch MacBook from Apple measures slightly thicker at 0.52 inches, and would cost 50 bucks or quid less to match the Acer laptop on memory and storage and provide a sharper display, though it’s missing biometric login and some screen real estate. 
At this point, the Swift 7 seems to have an awfully specific focus on thinness, mobility and connectivity … perhaps to its own detriment against similarly priced rivals.
Design
Acer has clearly developed the Swift 7 with thinness, lightness and portability in mind. With that, the Swift 7 is a sublime laptop to pick up and hold, measuring just 0.35 inches (8.98mm) thin and weighing a svelte 2.6 pounds (1.18kg).
This laptop is also quite the looker, encased in an all-black, brushed unibody aluminum shell with two sturdy hinges holding the display in place. Acer’s latest Swift 7 is definitely one of the most luxurious-feeling laptops we’ve tested. Even the screen bezels and trackpad are wrapped in chrome bands – and so is the fingerprint sensor on the left beside the Tab key.
Thankfully, the keyboard on this year’s model is backlit and feels fantastic to type on in spite of the incredibly shallow travel afforded to it. Tuning up the feedback force helped immensely here. However, we cannot say it’s the same for the trackpad.
In order to achieve this landmark thinness in laptop design, Acer decided to completely remove the clicking function from the trackpad. This means that you can only tap to click as a means of interacting with the Windows 10 interface.
We could go on for far longer about this flaw than we’re about to, but just know that this omission presents a serious learning curve or leveling of expectations. Even we, as devout tap-to-click fans, find using the laptop to be a bit painful without being able to click at all. Without clicking, moving and resizing windows requires precise double-taps, which quickly becomes bothersome.
Not being able to click also greatly reduces the speed at which we can navigate Windows 10, keeping us from moving the cursor with our index finger and clicking on items with our thumb, like so many laptop users do.
Seriously consider how important the tracking experience on a laptop is to you before deciding to buy this one, because it’s something you’ll be stuck with for the life of the device. It’s frankly enough to turn us off to the thing.
Display and audio
Acer has at least gone to great lengths to improve the Swift 7’s multimedia experience, but those pursuits have produced new drawbacks of their own. The touchscreen is now 14 inches on the diagonal, thanks to far narrower bezels.
The IPS screen makes colors absolutely pop and offers up wide viewing angles for sharing content, which could come in handy when pushing the display down 180 degrees. Movies and still photos look vibrant and crisp through the CineCrystal LED display.
However, Acer appears to have been forced to move the webcam to beneath the display in order to reduce the side bezel width. Of course, we’re no less miffed by this on the Acer Swift 7 than we’ve been with that of the Dell XPS 13: centered but beneath the display rather than above it.
We’ve seen Ultrabooks achieve similarly thin bezels with normally positioned webcams, so there’s really little excuse here.
As for the audio performance, it’s unsurprisingly poor coming from such a thin and light laptop. The laptop’s design leaves room for only the smallest audio drivers that fire from the bottom of its base, leaving you with tinny and thin sound in movies and music. Just be grateful that Acer didn’t kill the headphone jack in making the world’s thinnest laptop.
For costing as much as it does, we’re not seeing the performance we’d expect from the Acer Swift 7. The Intel processor inside this machine has two major factors working against it: it’s a 7th-generation chip that has been outpaced pretty handily by the 8th generation, and it’s a Y-series chip, one designed for low-power, fanless devices.
While there’s nothing wrong with such a processor in itself, the problem is that this laptop’s key competitors in this price range aren’t that much thicker or heavier despite using full-blown Intel U-series processors, and they are so much better off for it.
As you can see by the benchmarks, the Swift 7 is outclassed by the Spectre 13 in every performance-based benchmark – and that laptop is merely six-tenths of an inch thicker (and actually a hair lighter). This is largely because the HP laptop uses an 8th-generation, U-series full-fat Intel processor to the Swift 7’s older, lower-power chip.
You can see the same story play out across comparisons, where the Huawei option especially outpaces the Swift 7 with its dedicated graphics. Even the 12-inch MacBook produced similar performance numbers with a weaker Intel m3 processor from the same generation, likely on account of how much more Apple can tune its computer hardware to the software.
By all accounts, the Swift 7 simply does not produce performance that is comparable to rivaling laptops that come with similar price tags or are available for even less. We even see a bit of sluggishness from the laptop when opening ad-filled web pages and when loading large media files.
For being just tenths of an inch thinner than all the rest, the Swift 7 sure does lose out on a lot.
Battery life
That said, we do find the Swift 7 to deliver some fantastic battery life figures, even if they’re unsurprisingly behind Acer’s own promises. Acer claims up to 10 hours of use from the laptop; we’ve seen it fall a little more than an hour short of that.
You’re likely able to get an entire work day’s worth of use out of this laptop; of course, assuming the tasks involved are all relatively lightweight. Meanwhile, we’ve found the more powerful MateBook X Pro and more popular MacBook to last just as long in our benchmarks – both of which can be had for less than this laptop.
Windows Hello and onboard LTE
Two of the most compelling features about the Swift 7 are its biometric login and cellular connectivity. The biometric login comes via a fingerprint sensor that’s embedded into the keyboard deck left of the Tab key.
Setting up this fingerprint sensor is as simple as it is on other Windows laptops, and it works admirably. The placement is also easy to appreciate at a time when some brands are still embedding fingerprint sensors into touchpads and other strange spots.
The onboard LTE connection is handled via an Intel modem using an electronic SIM card, or eSIM, which is connected to a global cellular network by Transatel known as Ubigi. Every Swift 7 comes with a 1GB, one-month free trial of the service. After that, you’ll have to sign up for a data plan; nearly every region is within its coverage area, the exceptions being most of Africa, some of central south Asia, chunks of South America and all of Australia.
We find the service to be just fine outdoors, but to get rather bogged down within thickly-walled structures, as is the case with most of New York City. Still, the convenience of onboard LTE isn’t lost on us, though we wish we could just sign up with one of the major US carriers we already have a phone plan with for even more convenience.
Final verdict
The Acer Swift 7 is the result of a hellbent mission to make the next “world’s thinnest” laptop. Acer certainly got there and can put that string of words on the box, but what kind of product did it result in? Frankly, one that’s far too easily outpaced and outpriced.
You may have the world’s thinnest laptop if you’re to pick up an Acer Swift 7, but you also have a laptop without a properly working trackpad. You also have a laptop that isn’t as powerful as others that are cheaper, and not that much thicker or heavier, while still looking just as premium.
While we admire Acer’s excellent product design chops brought to bear in the Swift 7, we can’t confidently recommend you buy this laptop unless you must absolutely fulfill your desire to own the thinnest laptop.
0 notes
gta-5-cheats · 6 years
Text
Dell Inspiron 15 5575 Review
Shortly after Intel announced the first of its 8th gen core CPUs back in August, AMD unveiled its new Ryzen Mobile series. Thanks to both companies increasing core counts across various product lines, we’re seeing big performance improvements over last year’s laptops. Intel’s dominance in this space is being challenged for the first time in years.  
With just four models out right now, these new CPUs are built using AMD’s ‘Zen’ CPU cores and feature integrated graphics based on the recently launched Vega architecture. Intel’s new 8th gen CPUs have proven to have strong compute performance, but integrated graphics has never been the company’s strong suit, especially when it comes to gaming. This is the main area in which AMD hopes to have the upper hand. There are a couple of laptops already in the market with AMD CPUs, and today, we’ll be reviewing one from Dell’s recently launched Inspiron 15 5575 series.
This series is currently available with Ryzen 3 and Ryzen 5 CPUs, and you have various options in terms of colours, display resolutions, and RAM. Our review unit today is the top-end SKU in the series, according to Dell India’s website. It features an AMD Ryzen 5 CPU and is priced at Rs. 50,690. Let’s see if AMD’s technology has the chops to challenge the Intel stronghold.
Dell Inspiron 15 5575 design
The Inspiron 15 5575 is a pretty basic-looking laptop. Dell has given the plastic parts a metallic finish but once you hold it, it’s easy to tell that it’s not real aluminium. The build quality is quite solid though, and the body of the laptop doesn’t flex easily even if you apply pressure. It feels as though it will be durable enough for long-term usage. On its website, Dell advertises a host of different colour options for the 5575, but at the time of this review, our particular variant was only on sale in a Licorice Black trim. The silver unit that we received doesn’t look bad, but it does seem a bit bland.
The lid offers good protection to the 15.6-inch LCD screen. The resolution is decently high at 1920×1080, and the edges of text and icons don’t appear overly jagged. There’s also an anti-glare coating so reflections aren’t much of an issue. However, the panel used here is clearly not IPS quality, and so viewing angles are poor, and colours are dull. The brightness is sufficient, but at full brightness, whites tend to burn out easily when not viewed head-on. You also get a thick, old-fashioned bezel all around the screen, and a HD webcam in the usual spot.
  You get a single hinge in the middle of the display, which also conceals all the vents. This laptop is quite thick, and even with the lid open, the base alone measures 22.7mm in thickness. In order to give it an illusion of slimness, Dell has tapered the sides and front of the laptop a bit.
Connectivity is good, and includes two USB 3.0 and one USB 2.0 ports, HDMI, Ethernet, an SD card slot, a microphone and headphones combo socket, and a DVD writer, which is something we haven’t seen in a long time. There’s only one LED indicator near the power inlet, which glows white when you’re plugged in and amber when the battery is low. What’s missing here is a USB Type-C port.
The palm rest and trackpad are quite spacious, and Dell has managed to fit in a full-sized keyboard complete with a number pad. The chiclet keys are well spaced but aren’t backlit, and we didn’t find the tactile response to be very good either. They’re also quite noisy. The direction keys are nicely separated from the others, and the power button is isolated above the keyboard to avoid accidental presses. However, there’s no fingerprint sensor, even with this top-end configuration. The surface of the trackpad isn’t the smoothest. We found tracking to be a little jumpy at times and gestures don’t always work flawlessly. 
On the bottom, you have four rubber feet to help get some clearance. The stereo speakers are also placed at the bottom, towards the front. There’s no hatch for accessing any of the components and the battery isn’t removable either. Overall, the Inspiron 15 5575 isn’t much to look at. It’s built to be utilitarian, and that seems to be the end of it. This laptop isn’t very light either, at 2.5kg, so carrying it around every day won’t be very pleasant.
Dell Inspiron 15 5575 specifications
As we stated earlier, the Inspiron 15 5575 model that we have is powered by a Ryzen 5 2500U CPU, which features four multi-threaded cores, giving you a total of eight threads. The base clock is set at 2GHz but that can be boosted to 3.6GHz, depending on the task at hand. Graphics duties are handled by the integrated Radeon Vega 8 GPU, which has eight compute units and a base clock of 1,100MHz. It also supports AMD’s FreeSync variable refresh rate scheme, when connected to a compatible monitor.
  The laptop has 8GB of DDR4 RAM running in dual-channel mode, a 1TB (5400rpm) hard drive, dual band 802.11ac Wi-Fi, Bluetooth 4.1, and a 42WHr battery. You get Windows 10 Home preinstalled, along with Microsoft Office 2016 Home and Student Edition and a 30-day trial of McAfee LiveSafe. Dell also bundles its own software like Dell Recovery Environment, Support Assist, and Dell Mobile Connect. The latter lets you sync your phone to your laptop using Bluetooth, so you can receive calls and check SMS messages directly from your laptop, just like we first saw on the Dell XPS 13 9370.
Dell Inspiron 15 5575 performance and battery life
From the moment you power it on, this laptop doesn’t seem very responsive. This impression persisted with us even days after we set it up. This is most likely because of the slow mechanical hard drive being used. There’s an inherent sense of lag when loading apps or even opening new Windows dialogue boxes. Once your programs are loaded though, things are a bit smoother and even multitasking is quick.
The Inspiron 15 5575 runs slightly warm. After about an hour of streaming video using Chrome on battery power, we found that the bottom and the area behind the keyboard got rather warm. Thankfully, this didn’t spread to the palm rest area, and the keys and other parts of the laptop stayed cool.
In terms of performance, the Ryzen 5 2500U is best compared to Intel’s new Core i5-8250U. The Ryzen 5 pulled ahead of Intel’s offering in some synthetic tests. In 3DMark Fire Strike, the Inspiron 15 5575 scored 1,639 points, which is roughly 600-700 points more than what a Core i5-8250U laptop would typically achieve. In CPU benchmarks, we’ve seen slightly better results from Intel, both for single- and multi-threaded tests. Our real-world file compression and video encoding tests showed that the Ryzen 5 doesn’t quite match the speed of Intel’s counterpart, taking up to a whole minute extra when compressing files and encoding videos.
In games, the integrated Vega 8 GPU is an advantage over Intel’s integrated solution. In Rise of the Tomb Raider’s built-in benchmark, we typically get single-digit framerates from Intel’s Core i5 offerings with the resolution set to 1080p and the ‘Low’ graphics preset. The Inspiron 15 5575, on the other hand, returned 15.7fps with the same settings. The laptop doesn’t get too hot when gaming, which is something we liked. The exhaust fans are audible but not too distracting.
Games downloaded from the Windows store, such as Asphalt 8, ran smoothly. We also tried FarCry 4, which gave us a semi-playable average framerate of 20fps, but only after dropping the resolution to 1600×900 and the graphics to the ‘Low’ preset. We managed to get GTA V running too, and we averaged around 27fps. However, we had to drop the resolution all the way down to 1280×720, with most of the graphics settings either at ’Normal’ or turned off. Finally, we were able to get a smooth 30+ frames per second in DOTA 2 at the native resolution, and with the graphics slider pushed all the way to the right for best quality.
The Dell Inspiron 15 5575 has no trouble playing 4K video files, although it’s hard to truly enjoy media due to the lacklustre display. Audio quality is decent but not great. Even with the MaxxAudioPro enhancement, audio sounds a bit hollow. On the bright side, the volume level gets quite loud.
Battery life is disappointing, as the 3-cell battery only managed to deliver about four hours of runtime on a single charge. This was with light to medium usage, which typically involved using Chrome and watching videos. In Battery Eater Pro, the laptop ran for just 1 hour, 36 minutes, which isn’t great either.
Verdict
The Inspiron 15 5575 series is currently Dell’s only lineup with AMD Ryzen CPUs. The onboard Vega 8 GPU performs quite well for an integrated graphics processor, but is still no replacement for a high-end discrete one. The most expensive variant that we reviewed doesn’t quite suit its Rs. 50,690 asking price, as the overall package leaves a lot to be desired. Windows 10 feels laggy, the display has weak viewing angles and dull colours, battery life is disappointing, and this laptop tends to run warm even on battery power.
The Vega 8 graphics do give AMD an edge over Intel’s integrated solution in most 3D games, but if that’s all you’re after, then Acer’s Swift 3 with this very same configuration looks a lot better at an online price of roughly Rs. 43,000. You lose out on the DVD drive and Ethernet port (which aren’t a huge loss for many people), but in their place you get a Type-C port, a backlit keyboard, and a fingerprint sensor.
If you’re looking for better gaming performance but don’t have the budget for a high-end laptop, then something like the Acer Aspire 5 A515-51G with an entry-level discrete GPU is still a better bet at the same price level as the Inspiron 15 5575. When we reviewed it, this model with the Nvidia MX150 GPU featured just 4GB of RAM, which was our main gripe, but it seems like there’s now an 8GB RAM option, available for roughly the same price as the Inspiron 15 5575. 
The Dell Inspiron 15 5575 is built well, but other than this, it’s hard to think of a good enough reason to buy or recommend it.
Price (MRP): Rs. 50,690
Pros
Sturdy body
Competent integrated GPU
Cons
Lacklustre display 
Runs warm 
Weak battery life 
Sluggish performance 
Ratings (Out of 5)
​Design: 3
Display: 3
Performance: 3
Software: 4
Value for Money: 3
Overall: 3
0 notes
Photo
Best Android phones (February 2018): our picks, plus a giveaway With Android thoroughly dominating the mobile industry, picking the best Android smartphones is almost synonymous with choosing the best smartphones, period. But while Android phones have few real opponents on other platforms, internal competition is incredibly fierce. From sleek devices that impress with premium design, to powerhouses brimming with features, to all-around great devices, and affordable phones that punch above their weight, the Android ecosystem is populated by a staggering variety of attractive phones. See also: Refurbished phone guide | Best Android tablets | Best Android watches But “greatness” is subjective, and sometimes spec sheets and feature lists are not enough to make an idea of how good a phone really is. In this roundup, we’re looking at the absolute best—the Android phones you can’t go wrong with. Editor’s note: We will be updating this list regularly as new devices launch. Samsung Galaxy Note 8 See more Galaxy Note 8 photos After a controversial 2016, Samsung’s Galaxy Note line is back in full force. With top-of-the-line specs, a stunning design, an all-new dual-camera setup, and new software features, the Galaxy Note 8 is the best Android phone you can buy right now. Editor's Pick Best Samsung phones you can buy right now Samsung is without a doubt the biggest name in the Android world, and so if you are considering purchasing a new phone, logic dictates you may be looking to pick up a phone made by the … The Note 8’s near-bezel-less 6.3-inch Quad HD+ Infinity Display with an 18.5:9 aspect ratio is one of our favorite parts about this phone. DisplayMate agrees with us too. It’s big, maybe too big for some people, but at least the company puts that screen to good use. There’s a new App Pairing feature that allows you to open up two favorited apps in multi-window at the same time, and there are a few new S Pen features that will satisfy the stylus users out there. What’s more, the new dual-camera setup on the back performs incredibly well. While picture quality isn’t a huge step up from what we saw on the Galaxy S8 and S8 Plus, the extra 12 MP sensor with OIS allows you to take clear, concise photos and impressive bokeh shots in just about any situation. No, it’s not perfect, but no phone is. Samsung’s fingerprint sensor placement is still super annoying, and this phone is expensive. If those things don’t matter to you though, the Note 8 might be the right phone for you. Check out our full review below! Read more Samsung Galaxy Note 8 review Samsung Galaxy Note 8 specs Samsung Galaxy Note 8 color comparison Samsung Galaxy Note 8 vs the competition Samsung Galaxy Note 8 vs Galaxy S8 quick look Samsung Galaxy Note 8 vs Essential Phone Samsung Galaxy Note 8 vs Galaxy Note Fan Edition quick look Top five new Galaxy Note 8 features Galaxy Note 8 price, release date, and carrier deals Buy now from Amazon Google Pixel 2 See more Google Pixel 2 photos The Pixel 2 is Google’s latest flagship smartphone, and it’s great at just about everything. It doesn’t have as many bells or whistles as the Note 8, but if you’re in the market for a simple Android phone, the Pixel 2 is a great option. This device has a 5-inch OLED 1080p display with a pixel density of 441 ppi. It doesn’t have a fancy 18:9 aspect ratio screen or a bezel-less design, though it does have front-facing speakers above and below the display. Inside, it comes with a speedy Qualcomm Snapdragon 835 processor, along with 4 GB of LPDDR4x RAM. 
You can buy the phone with either 64 GB or 128 GB of on board storage, but there’s no microSD card to add additional storage. If you like taking photos with your phone, you’ll be very pleased with the Pixel 2’s 12.2 MP camera. Not only has it been named the smartphone camera on the market according to DxOMark, we found the Pixel 2’s main camera to be incredibly impressive in most situations. There’s even a portrait mode on the front and back cameras, even though the phone doesn’t have a dual-camera setup. You might be wondering why we haven’t mentioned the Pixel 2 XL. It’s a great phone—don’t get us wrong—but just know that the XL’s display has had its fair share of issues since launch. The LG-made pOLED 6.0-inch display on the 2 XL shows off a blue tint when the phone is tilted, it’s not tuned to be as vibrant as other OLED panels, and many early adopters have already been noticing burn-in issues. Google says it will continue to combat these issues with software updates, and it’s also extended the warranty to two years. If you want a Pixel phone with a larger battery and can live with a mediocre display, we’d recommend going for the Pixel 2 XL—you’ll be very happy. But if you can live with a smaller battery and a smaller screen (that doesn’t have any issues), we’d recommend the Pixel 2. Read more Google Pixel 2 and Pixel 2 XL review: the way Android is meant to be Google Pixel 2 and Pixel 2 XL specs Google Pixel 2 XL unboxing and first impressions Google Pixel 2 vs Samsung Galaxy Note 8: the flagship battle Google Pixel 2 vs Google Pixel: what’s changed? Google Pixel 2 and Pixel 2 XL specs: Google’s vision for the modern flagship Google Pixel 2 cases: here are some of the best you can buy Google Pixel 2 XL cases: here are some of your best options Buy now from the Google Store Buy now from Verizon LG V30 See more LG V30 photos The V30 is the latest flagship phone from LG, which has been struggling to compete, at least in sales, with its biggest Android rival Samsung. It is also the successor to 2016’s LG V20, which included a secondary 2.1-inch screen on top for showing app shortcuts, media controls, and more. The V30 does away with that secondary display, and instead has one nearly bezel-free 6-inch screen, using LG’s new Plastic OLED (pOLED) panel. LG offers a slide-out menu on the phone’s display (called the Floating Bar) that floats around the screen as a substitute (sort for) for folks who might miss the secondary display. In terms of hardware specs, the LG V30 has everything you would expect from a flagship phone in early 2018. It has the Qualcomm Snapdragon 835 with 4 GB RAM, along with 64 GB of onboard storage, a 3,300 mAh battery, and a IP68 dust and water resistance rating. LG is heavily promoting the advanced camera and photography features on the V30, too. It has a dual rear camera that includes a 16 MP sensor with a f/1.6 aperture, along with a 13 MP sensor with a f/1.9 aperture. The rear camera has a Crystal Clear Lens instead of plastic, which should mean you should get more realistic looking photos with the V30. Video creators should have fun with software features like Cine Video, which allows users to quickly put in video effects, along with Point Zoom, which will let owners zoom into any point in a video. Audiophiles should also be happy with the LG V30, as it has support for Hi-Fi Quad DAC tuned by B&O Play. It is also the first smartphone to support MQA, which is supposed to let users stream high-res audio, but with a smaller file size and no loss in quality. 
Read more LG V30 review: a photography and videography dream LG V30 specs Five reasons why the LG V30 is better than the Galaxy Note 8 LG V30 vs Galaxy Note 8: camera features What it’s like to film with the LG V30 LG V30 vs G6 quick look: LG has finally hit a groove Inside the LG V30’s new display: POLED vs Samsung’s Super AMOLED LG V30 price, release date, and carrier deals Best LG V30 cases Buy LG V30 Samsung Galaxy S8 and S8 Plus See more Galaxy S8 and S8 Plus photos Following a successful launch of the Galaxy S7 and S7 Edge, Samsung seemed to have a pretty good idea as to what users want in a smartphone. Solid battery life, high-res screens, impressive camera performance and more were all things the company achieved with the 2016 flagships. And while the Note 7 seemed to improve even more in those areas, overheating problems caused the device to enter total recall mode. It should come as no surprise that the Galaxy S8 and S8 Plus feature top-of-the-line specifications, great cameras and an all-new design that’s truly futuristic. This time around, Samsung included a curved screen on both the S8 and S8 Plus, as well as a unique 18.5:9 aspect ratio that allows for a much more comfortable in-hand feel. The company even ditched its famous physical home button and included on-screen navigation keys (finally). Under the hood, these devices come with the Qualcomm Snapdragon 835 processor (or Exynos 8895, depending on the region), 4 GB of RAM, 64 GB of on-board storage, and one of the latest versions of Android, 7.0 Nougat. Samsung even launched a few extra accessories alongside the S8 that you should definitely consider checking out. The new Samsung DeX dock lets you use your S8 as a desktop computer, and there’s also a new Gear 360 camera that allows for shooting video in 4K and live streaming to YouTube. Of course, there’s also a new Galaxy S8-compatible Gear VR headset, complete with a controller for easier navigation. All in all, the Galaxy S8 and S8 Plus are two of the best smartphones that launched in 2017. Read more Samsung Galaxy S8 and S8 Plus review Samsung Galaxy S8 and S8 Plus specs, price, deals, and more What I don’t like about the Galaxy S8 Plus 5 reasons why the Galaxy S8 Plus is my daily driver Samsung Galaxy S8 color comparison Samsung Galaxy S8 vs the competition Best Galaxy S8 Cases / Galaxy S8 Plus cases Hands-on with the new 4K Gear 360 Samsung Gear VR (2017) review Buy Galaxy s8 Buy Galaxy S8 Plus OnePlus 5T See more OnePlus 5T photos The OnePlus 5 was a solid smartphone, but it wasn’t really up to 2017’s standards on the design front. That’s why OnePlus changed things up quite a bit with the OnePlus 5T. The first thing you’ll notice with the 5T is its bit 6.01-inch Full HD+ AMOLED display with an 18:9 aspect ratio. This makes the device look more in line with other 2017 devices, especially because its predecessor came with a more traditional 16:9 screen. The under-the-hood specs are mostly the same as the OnePlus 5’s, but OnePlus decided to switch up the camera setup this time around. Now, in place of the OnePlus 5’s telephoto lens, the 5T sports a secondary 20 MP sensor that uses a fancy new technology called Intelligent Pixel Technology. Basically, it takes better low-light photos than before. The other big change with the 5T is in regards to biometric security. Not only has the fingerprint sensor moved around to the back, you can also unlock the 5T using face recognition. It’s super fast, but we’ve found it to miss a handful of times for some unknown reason. 
Perhaps the biggest missing feature on the 5T is an IP rating for dust and water resistance, which pretty much every other flagship device has. If you can get past that, the OnePlus 5T will be a great option for most people—especially because it costs a fraction of the price of most competing smartphones.

Read more
OnePlus 5T review
Best OnePlus 5T cases
OnePlus 5T specs
OnePlus 5T price, release date, and deals
OnePlus 5T vs OnePlus 5: worth the upgrade?
OnePlus 5T vs Samsung Galaxy S8
OnePlus 5T vs LG G6
Buy now from OnePlus

Huawei Mate 10 Pro
See more Huawei Mate 10 Pro photos

The Huawei Mate 10 Pro has all the features one would expect from the latest high-end flagship smartphones on the market. It has a 6-inch OLED display with an 18:9 ratio, a Full HD+ resolution of 2,160 x 1,080, and very small bezels on the top and bottom of the display. Inside, there's Huawei's in-house octa-core Kirin 970 processor, along with a dedicated Neural Processing Unit for faster onboard AI processing. Huawei claims we will see more AI improvements on the Mate 10 Pro in future updates. You can purchase this phone with either 4 GB of RAM and 64 GB of storage, or upgrade to 6 GB of RAM and 128 GB of onboard storage. There's no microSD card slot for adding additional storage, by the way, nor does it have a 3.5 mm headphone jack (the slightly smaller Mate 10 has both). The Mate 10 Pro ships with Android 8.0 Oreo out of the box.

The Mate 10 Pro also comes with an IP67 dust- and water-resistance rating, as well as a large 4,000 mAh battery that supports Huawei Supercharge, which takes the Mate 10 Pro to nearly a full charge in about an hour. The Mate 10 Pro also has a fantastic dual-camera setup with a 20 MP monochrome sensor combined with a 12 MP RGB sensor. The phone earned a high score of 97 for its rear camera from the image-testing firm DxOMark, which is right up there with the iPhone 8 Plus and Galaxy Note 8.

Read more
Huawei Mate 10 and Mate 10 Pro review: all about promises
Huawei Mate 10 and Mate 10 Pro vs the competition
Huawei Mate 10 and Mate 10 Pro pricing and availability
Huawei Mate 10 series specs: better, faster, stronger
Where to buy the Mate 10 Pro

HTC U11
See more HTC U11 photos

The HTC 10 was one of our favorite Android phones of 2016, and for good reason. The Taiwanese company absolutely nailed the design of the 10, and it worked hard at scaling back the software to make it feel like the bare-bones Android experience we all know and love. And while it didn't have a bunch of gimmicky extras, that was okay. The HTC 10 was a solid Android phone that nailed the basics. Now HTC is back with the 10's successor, the HTC U11. With an eye-catching, glossy design, all-day battery life, and a smooth and snappy software experience, the U11 competes toe-to-toe with the Galaxy S8 and LG G6 when it comes to performance. It also has one of the best smartphone cameras on the market, according to DxOMark.

The standout feature on the U11 is something HTC calls Edge Sense. The sides of the phone are pressure sensitive, which allows you to physically squeeze the phone to activate a specific function or open an app like the camera or web browser. Having to squeeze your phone to make it do something does sound a bit odd, but we've really found this feature to come in handy. Before you go out and spend $650 on this baby, there are a few things you should know.
For starters, this phone doesn't have a 3.5 mm headphone jack, meaning you'll either need to use Bluetooth headphones or carry around the included headphone adapter everywhere you go. Also, while HTC's Sense is one of our favorite Android skins out there, it is feeling a bit dated at this point. If you can get past those few caveats, though, the U11 will certainly not disappoint.

Read more
HTC U11 review
HTC U11 specs, price, deals, and more
HTC U11 Edge Sense: what can it do?
HTC U11 vs the competition
Best HTC U11 cases
Has the HTC U11 already made the U Ultra obsolete?
HTC U11 announced: everything you need to know
Buy now from Amazon

Cast your vote, and participate in our giveaway!

There you have it – our picks from the best Android has to offer right now. Out of those listed, which do you feel is the very best? Be sure to sound off in the poll below. Each month we will also be conducting a giveaway, giving our readers a chance to win the phone with the most votes.

Winner, January 2018: Samsung Galaxy Note 8 (10,146 total votes)
Congratulations to Bram W. (Netherlands), winner of our January 2018 giveaway!
Best Android phones (February 2018) international giveaway!

Check out our related best lists:
Best cheap Android phones
Best dual-SIM Android phones
Verizon Android phones | Verizon prepaid phones
AT&T Android phones | AT&T prepaid phones
T-Mobile Android phones | T-Mobile prepaid phones
Sprint Android phones
Cricket Android phones

via Android Authority http://bit.ly/2GDnFRe
0 notes
robertvasquez763 · 7 years
Text
Traffic Jamming: In the 2019 Audi A8, We Let Automated-Driving Tech Take the Wheel
Our first opportunity to get into the fourth-generation Audi A8 came with an offbeat premise: We’d be heading out into what Audi boasts is the second most traffic-choked area in the whole of Europe, near the German cities of Essen and Düsseldorf—and we’d be doing so with the hope of getting stuck in traffic.
This isn’t a true first drive of the 2019 Audi A8, but it was our first opportunity to gain an understanding of how a new feature in the A8 works on a congested highway. Called Traffic Jam Pilot, it’s been specifically developed for SAE Level 3 automated driving—meaning that the driver no longer has to monitor the surroundings continuously and that the vehicle system will alert the driver when he or she needs to retake control. Audi claims it’s a world first.
There’s a big asterisk next to what we describe, because we weren’t in the driver’s seat. Whether in Germany or back in the United States, for Level 3 automated driving a specially certified test engineer must be ready to take over. In our case, that was Peter Bergmiller, technical project lead for the system.
Part Traffic Hunter, Part Road-Trip Bingo
“I think we might get lucky!” Bergmiller exclaimed excitedly, homing in on a red-colored stretch of motorway on the navigation system’s live-traffic maps and anticipating that there was about a mile-long traffic jam up ahead. Soon we were in a sea of brake lights, and as our speed fell below 37 mph (60 km/h), the dash display showed a vehicle within white markings, signaling that the system was ready to take over. Bergmiller simply pressed the Audi AI button at the far front of the center console and pulled his feet and hands away from the controls. The A8 was driving itself until further notice.
For now, Traffic Jam Pilot engages only if the system can meet a checklist of parameters. It has to be on a limited-access divided highway; there needs to be a vehicle directly in front and a line of slow-moving vehicles in adjacent lanes; and the system needs to be able to make out lane markings and the edge of the roadway (with a barrier or guardrails, for instance).
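To make that checklist concrete, here is a minimal sketch of how such an engagement gate might be expressed in code. It is purely illustrative: the type, field, and function names are ours, not Audi's, and the only specifics it borrows are the sub-60-km/h speed and the road conditions listed above.

from dataclasses import dataclass

@dataclass
class RoadState:
    speed_kmh: float
    on_limited_access_divided_highway: bool
    lead_vehicle_detected: bool
    adjacent_lanes_slow: bool
    lane_markings_visible: bool
    road_edge_detected: bool

def traffic_jam_pilot_available(s: RoadState) -> bool:
    # Every condition from the checklist must hold at the same time.
    return (
        s.speed_kmh < 60.0
        and s.on_limited_access_divided_highway
        and s.lead_vehicle_detected
        and s.adjacent_lanes_slow
        and (s.lane_markings_visible or s.road_edge_detected)
    )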
The system is designed to keep doing the driving even if it momentarily can’t follow roadway markings, pointed out Bergmiller. That may not be a situation you’d find in Germany. It is more likely in the U.S., where lane markings are “of varying quality, we’ll leave it at that,” he quipped.
“A Level 3 system takes the responsibility of the driving task, so we also had to change the way you use it,” he explained. “It’s no longer like with the classic driver-assist systems where you just activate the system and whenever it can do something, it does.”
Bergmiller made a few quick menu selections on Audi’s MMI and brought up the pan-European TV channel Arte—pointing out that once the system is engaged it’s entirely fine for the driver to watch TV, respond to text messages, or have a face-to-face conversation with a passenger. Under Level 3 conventions, the driver can take his or her focus away from the road, but only to use entertainment or productivity features that are fully integrated with the vehicle’s interface.
Barely a minute goes by before there’s a warning chime and a visual prompt for Bergmiller to retake control—because we’re leaving the autobahn and transitioning to a divided highway with occasional traffic lights.
No Napping in the Driver’s Seat
Strict rules still apply to the driver, who must remain with butt in seat and torso pointing forward. Infrared sensors and a camera study eye movements and head motions, respectively, to assure alertness for when Traffic Jam Pilot makes a transition request to have the driver retake the wheel. Once you’ve been caught snoozing, the system can’t be enabled again until after you stop.
On the handoff back, the driver is expected, ideally, to take over within 10 seconds, in response to a chime and a pulsating red reminder at the edges of the Virtual Cockpit display. Beyond that, the request gets more audibly and visibly urgent as the car turns down the audio system and jerks the seatbelts in a not at all subtle way. Go no-hands past the 20-second mark and the car will follow an escalation strategy, starting to decelerate, activating its hazard lights, and slowing down within the lane of travel, then unlocking the doors and calling for emergency assistance as it rolls to a stop.
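One rough way to picture that handoff escalation is as a timer-driven sequence of stages. The sketch below is our own simplification, assuming a single elapsed-time input; only the 10- and 20-second marks and the end-stage behavior come from the description above.

def takeover_escalation_stage(seconds_since_request: float, driver_took_over: bool) -> str:
    # The request clears the moment the driver responds.
    if driver_took_over:
        return "driver back in control"
    if seconds_since_request <= 10.0:
        return "chime and pulsating red reminder"
    if seconds_since_request <= 20.0:
        return "urgent alerts: audio turned down, seatbelt jerks"
    # Past roughly 20 seconds the car assumes the driver is unresponsive.
    return "decelerate in lane, hazards on, stop, unlock doors, call for help"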
Traffic Jam Pilot gets some of the data it uses remotely—such as map data and routing information (it even adapts its driving behavior to local rules)—but it doesn’t rely on over-the-air connections for anything safety-critical. All the sensing and decision making is done onboard, with a suite of hardware that includes 12 ultrasonic sensors, four 360-degree cameras, one single-lens front camera, four midrange radar sensors, one long-range radar sensor, and a forward laser scanner.
“Your phone may just crash and restart. This is not an option for the system in a decision to brake or not brake.”
– Peter Bergmiller, Audi
Bergmiller emphasized that the team put a tremendous effort into the platform of the vehicle, because it would be very hard to upgrade items such as actuators and brakes partway through the model’s life cycle. All the controllers and actuators are capable of handling the car at the limit—a requirement that essentially sent the A8 through two separate development paths.
Over an extended lunch stop, we learned more about the processing that gathers all those inputs, makes decisions, controls the vehicle, and even anticipates the future. Each of those sensors has its own strengths and weaknesses (radar is good for seeing two cars ahead, for instance). So inputs are processed individually at the sensor level and then factored together, within a central controller (which Audi calls zFAS), into a complex “sensor fusion” processing to paint a picture of what’s around the vehicle. A second sensor fusion is processed in a different location in the vehicle, with the laser controller, and then the ordered tasks from the two paths are checked. “If one of these two paths says that we have to hit the brakes, this is supercritical, we just do it and go the safe way,” said Bergmiller. “The safer action always wins.”
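A toy illustration of that "safer action always wins" arbitration between the two independent fusion paths might look like the following. The structure and names are our assumptions for illustration, not Audi's zFAS code.

def arbitrate_braking(path_a_decel_mps2: float, path_b_decel_mps2: float) -> float:
    # Each redundant fusion path requests a deceleration (0 means no braking).
    # The stronger request, i.e. the safer action, always wins.
    return max(path_a_decel_mps2, path_b_decel_mps2)

So if one path calls for hard braking and the other sees nothing alarming, the car brakes.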
Thinking Ahead
The A8 we drove might have been badged Audi Intelligence, but here’s where a different kind of AI (artificial intelligence) comes in. One sensor fusion uses a time-triggered processing path—meaning it can disregard sensor inputs that probably aren’t critical at the moment (learning and prioritizing as it goes) in order to cycle through its operations once every 40 milliseconds—while the other one processes through all the sensor inputs.
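The time-triggered path can be pictured as a fixed 40-millisecond loop that works through inputs in priority order and defers whatever will not fit before the deadline. The sketch below is a generic illustration of that scheduling idea, not Audi's scheduler; the function and the priority field are ours, and only the 40 ms period comes from the article.

import time

def time_triggered_cycle(process, inputs, period_s: float = 0.040):
    # One fixed-period cycle: highest-priority inputs first (lower number
    # means higher priority); lower-priority ones are skipped once the
    # deadline arrives and picked up in a later cycle.
    deadline = time.monotonic() + period_s
    for item in sorted(inputs, key=lambda i: i["priority"]):
        if time.monotonic() >= deadline:
            break
        process(item)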
The system is always, in parallel, predicting the future position of everything for at least four seconds, according to Bergmiller. “So if all the sensors were to go blind, we have this knowledge about the future that we had been predicting right before that, and we can send that information to the brake system,” he said. “Every input from every sensor arrives at the processor with a time stamp from milliseconds in the past, so a certain amount of prediction is necessary in nearly every calculation within automated driving.”
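As a very rough illustration of that kind of look-ahead, the sketch below projects a tracked object forward under a constant-velocity assumption, first catching up for the age of the measurement and then predicting over a four-second horizon. It is our own simplification, not the A8's actual predictor.

def predict_position_m(x_m: float, v_mps: float, measurement_age_s: float, horizon_s: float = 4.0) -> float:
    # Compensate for the milliseconds-old timestamp, then look ahead over
    # the prediction horizon (about four seconds in the article).
    return x_m + v_mps * (measurement_age_s + horizon_s)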
It can’t be emphasized enough that having the driving computers crash, lag, or reboot is absolutely unacceptable. “Your phone may just crash and restart,” said Bergmiller, with a serious expression. “This is not an option for the system in a decision to brake or not brake.”
On the other hand, a concern that does carry over from personal devices—perhaps even more so—is data security. By German privacy law, Audi isn’t allowed to store any personal or vehicle data without a key safety reason behind it. Cars with Traffic Jam Pilot get what’s effectively a black box that can allow retrieval of data only about how the vehicle sees the world and whether or not the driver was in charge at the time of the accident. No images of the driver or the environment are stored, and all data belongs to the driver, who must visit a dealership and sign paperwork to decrypt that data.
Software upgrades are easily done, but, for security reasons, Audi does not want over-the-air updates yet; any updates would be administered at the dealership via a secured and encrypted connection.
Fine Print and Legalese
In the United States, Audi is hoping to keep to a similar privacy standard, but things are likely to be a bit different. As Audi of America put it, the federal government regulates the car, while the states oversee the driver and the observance of traffic laws. Traffic Jam Pilot currently isn’t allowed in any of Audi’s markets without special allowances, such as having this test engineer behind the wheel. Officials are optimistic that in Europe and the U.S. it will soon be allowed—so optimistic that they hope to simply start building the complete hardware set into all A8 models within the next year or so.
Although Audi thinks of Level 4 automated driving—which would always offer to take the wheel in certain situations, such as on the highway or in a parking garage—as being some years off, there’s potential to expand this Level 3 technology. With the current hardware, for the most part, Audi could extend the A8’s Level 3 capability to non-limited-access divided highways or to higher freeway speeds—up to 81 mph, potentially—although Bergmiller hints that would take a predictive window well beyond the current four seconds, plus more rigorous expectations in handing control back to the driver. Lane-change functionality could be another forward step.
Eventually we got a short stint with Traffic Jam Pilot (and the A8) from the driver’s seat—on a closed course, with a chase car and some special programming—where we found the system resilient and flexible in the way it takes control, as well as far smoother in its inputs than current lane-keeping aids. We were invited to try other things that might also prompt the driver for a takeover. Manually dialing up a downshift did it, for instance, as did suddenly opening the driver’s door a couple of inches.
2019 Audi A8: Flagship Floats on an Active Suspension
Audi: Full Autonomy Still 10 Years Away
Audi A8: Review, News, Photos, Full Info
By then it was well into the early afternoon, and we headed back out with Bergmiller, who suspected that drivers getting a jump on the weekend were creating a nice highway traffic jam. Within minutes, we did indeed find a couple of waves of congestion that were just slow enough to engage the system again; and by then, accustomed to the alerts, we could experience how confident it is in its inputs and how straightforward the handoffs back to the driver are.
After a quick farewell to Bergmiller and the Audi team, we were handed the key to a Q7—with no Traffic Jam Pilot, of course—and headed back out on the autobahn, where we soon hit some more slowdowns, a reminder of how exhausting driving in gridlock can be.
from remotecar http://feedproxy.google.com/~r/caranddriver/blog/~3/zipNBL0dlYM/
via WordPress https://robertvasquez123.wordpress.com/2017/09/26/traffic-jamming-in-the-2019-audi-a8-we-let-automated-driving-tech-take-the-wheel/
0 notes
Text
DEFSEC 2017: Halifax’s Naval Technology Showcase
The annual Canadian Defence Security and Aerospace Exhibition Atlantic (DEFSEC) is the second-largest show of its kind in Canada and well reflects the importance of military hardware and software to the Maritime economy. Evolving and growing from the ground-show portion of the Nova Scotia International Airshow, DEFSEC has become the see-and-be-seen event for naval personnel and their industrial counterparts. Given the tens of billions of dollars expected to be invested in new ships and their sensor suites in coming decades, this show would seem to have established a permanent place on senior officials’ calendars and a permanent home in downtown Halifax.
The Cunard Centre on the waterfront is a fine facility but for the second year in a row late summer humidity made for a stifling exhibition hall. With HMCS Sackville gently rolling at anchor dockside, participants were finding reasons to stroll near the open doorways for a breeze. More than one smart-ass approached the booth of Bronswerk Onboard Climate Engineering to ask if they couldn’t crank up one of their water chiller units. The ‘Warden of the North’ was feeling more like Norfolk VA.
The humidity, however, could not put a damper on the upbeat mood among Canada’s defence industry players. With the Liberal government ‘doubling down’ on the previous Conservative commitment of $26 billion for a fleet of Canadian Surface Combatants (CSC), raising it to nearly $62 billion, industry continues to look forward to years of contracts and investment. Experienced players know the numbers will always change as the years roll by, but the budget figure does represent recognition by bureaucrats that the programme is needed. Whether it’s $26 billion over 15 years or $60 billion over 25 years, it should still mean lots of hulls and all the bits and pieces that go on them.
The general public can be forgiven for tuning out the endless adjustments of timelines and budget dollars that go with military procurement but contractors can never take their eyes off the prize. Time and tide have nothing on the vicissitudes of political events and the variety of ‘critical’ elements that fall in and out of favour during a government’s mandate. Change a government, and the calculations usually go out the window with them.
Big players like Lockheed Martin, Irving Shipbuilding or BAE Systems might be able to roll with the punches and pick up enough civilian work to keep the doors open, but hundreds of smaller companies with innovative technologies need to make real sales to stick around. With rapid advances in material sciences, engineering breakthroughs and processes that only a few companies have mastered, there is a hyper-competitive marketplace on the cutting edge of technology. This ain’t your Grandpa’s navy!
Since the peacenik crowd has given DEFSEC a pass, the public might not feel the need to pay attention, but that’s a shame. Although there were displays of traditional military/paramilitary gear (e.g. Hudson Supplies featured Tasmanian Tiger combat gear and Fastmag IV “stagger stack” multiple magazine holders), as well as innovative weapons and powerful radars, there was also a mind-boggling array of 3-D software programmes and virtual reality training simulators. At a Thursday presentation, ViaSat announced it would launch a satellite this year that would provide one terabyte (1,000 gigabytes) per second of high-frequency streaming. Even if you don’t know a gigabyte from a giggle, you have to be impressed with the ability of companies that can carry battlefield tactical data, navigation and real-time situational awareness, and the latest Adam Sandler flop to your airplane seat on one machine.
Technology is blurring the lines in the old ‘guns-or-butter’ arguments. There has always been a cross-over between materials and technology developed in the crucible of war (radar, rocket and jet engines, etc.) and their application in the civilian sphere. Now we seem to see more civilian developments being harnessed for military applications. From virtual reality simulations (e.g. Halifax’s own Modest Tree) that can help train shipboard firefighting crews without actually torching a ship, to cloud computing and interoperable software that allows architects and engineers to solve construction issues even after the keel is laid (e.g. STX France’s Smartshape), the ability to get personnel and vessels up and running faster appears to offer limitless possibilities.
While navies have to keep taxpayer dollars in mind for their operations and acquisitions, they also have to offer evidence that they care about the environment. Sure, blasting enemies with high explosives is the professed mission, but in port or at sea they also have to keep public sympathy onside. Engines have to be more efficient and cleaner running. Systems have to emit fewer pollutants, and equipment has to be long-lasting. The oil-less water chiller from the aforementioned Bronswerk features frictionless magnetic bearings that quiet the huge machine and increase its efficiency. Landlocked institutions such as hospitals and universities likewise appreciate the cost savings. Companies such as Rocket Performance Ltd. offer environmentally responsible chemicals to preserve metals and plastics under harsh saltwater conditions. Given the slippery nature of military budgets, it is a wise company indeed that develops products that can design and equip a ship or guide a construction company as it builds a billion-dollar highway or new high-tech hospital.
This correspondent would like to thank DEFSEC Executive Director Colin Stephenson and his crew for their wonderful hospitality and the patient representatives of many companies big and small who very proudly and willingly described their technology.
0 notes