#like the run animation is always jank but here it's even worse
foxstens · 2 years
Text
for the life of me i cannot understand why unity’s parkour and stealth is so beloved
0 notes
nexstage · 5 years
Text
LIMBO SILVAM - EPILOGUE (2/2)
The events must have been intense indeed for the other Crystal Gems to be struggling emotionally like this; however, Peridot had already fought enemies worthy of being feared, like the Cluster, Jasper, heck, even the Diamonds. So, bring it on!
"Whatever it is, we can get through it. Spinel was defeated, which means the next step will be easy peasy. Now let's go. If it's something serious then there is no time to waste"
In contrast to her enthusiasm and confidence, the others were engulfed by dread. Connie was the palest and most sickly-looking, though she was grateful her friend didn't glance at her. The girl had no idea how to comfort Peridot once she was told what had happened to Lapis. In fact, Connie was terrified at the idea of the little gem asking about her other friends. It was too much; if they didn't get this over with soon, she was going to throw up!
The 5 members teleported to the island. Or, much to Peridot's horror and shock, what was left of it.
Mask Island, a place full of life, both natural and magic, was now the perfect depiction of a wasteland. The once-splendid forest of palm trees had become a gray, depressing moor covered in deep cracks, as if a giant in flames had stepped on it. Many kilometers ahead of them there were bits of green, but not much.
The worst part was the hole, no no, the colossal crater on the left side of the island. Peridot's imagination conjured a lot of crazy theories of what must have happened, and with the news of Spinel's attack, her conclusions got darker and more horrible than she feared.
"SPINEL DID THIS?!"
The others looked at each other aghast at her intelligence. At this rate, Peridot would remember everything and things would be harder for them when the time to comfort their friend came.
"You got it right. But there are more things that can explain to you what exactly happened"
"What's the point in an explanation after this mess and Spinel being bubbled, Pearl? And-And this place... This place was the home of the Steven-watermelon clan! Maybe that crazy Spinel used them as hostages and even destroyed their home to hurt Steven, to provoke him. But wait, I remember Connie, Bismuth, Lapis and I were in, well I still can't remember, but anyway, we were kind of confronting her, right? We-we were in this place fighting Spinel, weren't we? Is this the bad news you wanted me to be informed about?"
Amethyst was on the brink of poofing. She couldn't do this, just...just keep giving more clues to Peridot and hope for her memories to come back instantly, and then boom! She would figure out that Lapis was shattered and implode in the most painful way. Constructing this scenario was worse than watching her friend struggle after Lapis left with the barn.
"Amethyst!" the purple quartz jumped at the shout. Now what?! Couldn't she be given a break to, you know, mope around and figure out what to do next?!
"What, Garnet?"
"Transform into a helicopter and carry us where the proofs are. Quickly"
She groaned but fulfilled the order. This day couldn't get more stressful, and it was just midday!
The flight was anything but pleasant. Peridot attempted small talk and questions, but Garnet looming over her like a giant statue served as a good warning that it wasn't yet the time to say anything.
The poor green gem was this close to yanking her own hair out of her head. What the hell was wrong with her friends? Since when was there so much secrecy?
Now she could understand how frustrating it had been for Steven, having to deal with everyone lying to him, keeping secrets and pretending some things never happened. Just like his maternal unit, Pink Diamond, had done some time ago.
Peridot was so immersed in her thoughts that she didn't take into account the pained eyes directed at her. Connie held back the wheezes of a panic attack as best she could; the poor girl felt like garbage for not blurting out the whole truth and for making this more difficult than it should be, but Peridot needed her memories restored in order to understand the whole context. And after that, they would tell her what happened to Lapis.
The Indian girl grabbed Pearl's right arm for support, for a plan, for another outcome so the worst could be avoided. The veteran gem squeezed Connie's left hand, as a mother would so her apprentice wouldn't feel so lonely and scared.
Garnet, meanwhile, was driving herself mad with future vision, searching for a happier conclusion to Peridot's ordeal, but nothing. No matter the number of possibilities, it always ended up with her friend in tears after knowing what happened to Lapis.
"We arrived" Amethyst declared and landed.
The zone in which the purple quartz had alighted didn't have any signs of life. No plants, no animals, not even a single Steven-watermelon wandering in the area. Just huge fissures and dead vegetation for kilometers around them. But then, Peridot noticed something pretty peculiar: "Wait, is that a robot?"
The little Crystal Gem ran to get a better view of it; the others walked gravely, preparing themselves for the long explanation.
The machine was as huge as the one the former technician had built to defeat Pearl years ago. It had three tones of green and a star-shaped visor, similar to the one she had worn before being poofed; its left arm ended in a steel sphere with multiple spikes, the right arm in a circular saw. There were no legs; it seemed they had been ripped off the robot mercilessly.
"This robot... Did I built it?"
"Yeah, dude" Amethyst's voice was dull and lifeless, a big contrast compared to the energy she always carried talking and doing her own thing.
The group observed Peridot attentively; who knew how she would react once she pieced everything together. Meanwhile, the green engineer kept touching and analyzing the once-threatening robot in front of her. The sensation of familiarity grew stronger with every passing second; however, the answers were still stuck, as if a force prevented them from spilling out of Peridot's memory repertoire.
Something didn't make sense.
"If Bismuth, Connie, Lapis and I were fighting Spinel here, how was it possible for me to bring a robot? Did I bring it here or someone else did for me?"
"Lapis helped you" Pearl intervened as delicately as she could "She...bought you time to build it so you could fight Spinel easier"
Breaths were held in anticipation of the worst. Just a bit more, just another detail revealed, and Peridot would ask if Lapis was ok, and then the tragedy would be told.
"Lapis..." the little gem pronounced too raptured in her musings to hear the nervous gasps behind her.
So she had helped her. Peridot wondered if the blue gem was alright. She obviously wouldn't be, knowing that Steven was in the hospital, heavily injured, and that Mask Island was a wasteland, all because of a lunatic Spinel.
Suddenly, a stinging headache assaulted the core of her gemstone. Both hands went to it, massaging it so the pain would stop, but it only got worse. She didn't notice her friends' worried cries or how tightly Garnet held her shoulders due to the violent trembling of her petite body.
The only things she paid attention to were the desperate voices overlapping each other in her head and the set of images that had at first been too pixelated to make sense of, now becoming more clear-cut. Then the barrier broke in half and she saw everything. Lapis using her hydrokinesis to lift the giant injector; her and the blue gem fighting Spinel on Mask Island; the psychotic gem giving them a hard time until a giant, no no, Alexandrite came to their rescue, but even the fusion wasn't enough; her building a robot after Lapis told her to run and hurry; how strong Spinel's blows were against her little body. The last image in her head was Lapis' anguished face, the svelte gem saying 'I've got you, Peri' before obscuring her vision while pulling her close to her chest; however, there was another thing, and that was the huge injector, still full of bio-poison, being launched at them from many meters high.
"I remember..." Peridot mumbled in shock much to the other's nervousness "I remember everything!"
"A-Are you sure, Peridot?" Pearl approached her cautiously and stopped when the green engineer looked at her in horror.
"We were here, Pearl! Lapis and I were here fighting her, and-and she was so strong, my robot couldn't poof her. I saw it, Pearl! Spinel tore my robot's legs apart! I can still feel it, the punches, the kicks, she beat me down!"
"Peridot, please calm down" Connie held the gem from the shoulders despite wanting to be held in the same way. The worst was getting closer. "Spinel can't hurt you anymore"
'Not physically anymore, but emotionally and psychologically, yes.' The girl swallowed; the urge to throw up was almost too overwhelming for her to keep doing this.
"How though...?" Peridot's question didn't make sense "How could we defeat her? I mean, look at this place! It looks like hell had been unleashed! Like an asteroid had fallen on the whole island! I bet no one survived!"
'You couldn't be more right about this' Amethyst lamented, feeling helpless, lost. With a glance, she asked Pearl and Garnet for help to prevent a panic attack from Peridot that seemed imminent; the permafusion kneeled in front of the green gem and took off her glasses.
"Half of Mask Island was destroyed and almost all the Steven-watermelon clan was decimated. But there are survivors too. You're one of them"
"I see... I see... That's good." the engineer breathed in and out for some minutes  "They can rebuild this place or find another island to inhabit. Lapis and I can help them with that" Pearl, Connie and Amethyst grimaced "Wait. Is Lapis ok too?"
Garnet stood up slowly without saying anything while the others glanced at her stoic face, waiting for a response with as much dread as Peridot "To the Beach House, now. There is something you need to see there"
The little Crystal Gem opened her mouth to protest the cryptic answer, but the permafusion's somewhat menacing posture shut her up. Amethyst transformed into a helicopter and carried them to the island's warp pad. Peridot was surprised that, with the level of destruction Spinel had inflicted on the place, the warp pad had survived.
They returned to the Beach House in a pillar of light, the collective mood was like a stinky pot where anger, confusion, grief, uncertainty, fear, nervousness, and guilt boiled, mixing in a horrible mess.
Garnet opened the Temple's doors to the Burning Room, entering quickly. The permafusion relaxed a bit before placing herself in front of a red bubble; footsteps were heard from behind her, Peridot and the others watched her stiffened back, the little gem with confusion, the rest of the group mourning.
Garnet exhaled heavily before speaking "I'm sorry, Peridot. Just so you know, you have all of us to get through this. You're not alone"
The former technician raised an eyebrow. Something told her that whatever that was being hidden by Garnet wasn't good and she wouldn't like it.
Her comrade then turned around. Pearl, Amethyst, and Connie braced themselves. Peridot's eyes seemed like plates for how big they became at the heart-wrenching surprise in the permafusion's hands. No, it wasn't the bubble but the set of shards inside of it. Despite the change of color because of the red bubble, Peridot could identify them. There was only one person whose gemstone, beautiful as the ocean, could be formed by those shards.
"L-L-Lapis..." unconsciously, the green gem approached the bubble which Garnet gave to her in silence. Then, without anyone expecting it, the mechanic burst the bubble, freeing the shards. The others were going to do something, but the permafusion stopped them.
Never in her life had the color blue been so tragic, haunting and devastating. Little hands encircled the shards, putting them back together and forming a tear-shaped gemstone, or what was left of it.
Again and again, Peridot tried to repair the shattered gemstone of her companion in vain hopes of bringing her back, though nothing happened. Her vision started getting blurry, one, two, ten, so many tears escaped from her eyes and made contact with the shards. Her mind shut everything else out just to concentrate and revive her loved one, but the more she fell to denial, the more desperate she got.
Soon, her body began trembling while tiny, pitiful sobs came out of her mouth. She wanted to feel her again. To be wrapped up in Lapis' arms whenever she was sad, to see her lovely eyes rolling in fake annoyance and genuine fondness whenever she rambled about a new invention, to listen to every barf joke and laugh of hers, to see the blue gem smile and relax while they watched Camp Pining Hearts.
Peridot wanted Lapis to come back, like she had when she appeared and fought Blue Diamond. To be at her side and grinning forever, enjoying the marvelous life they had. The life they would have had, but which would never return to them.
Because Lapis was gone.
The Burning Room was filled by a soul-destroying shriek like no one had ever heard before.
16 notes · View notes
hydrus · 3 years
Text
Version 438
youtube
windows: zip, exe
macOS: app
linux: tar.gz
Hey, this causes errors if you are running from source and using PyQt5 (PySide2 is fine)! All the releases above are PySide2, so they are ok! I will fix this for next week, so if you are source+PyQt5, please hold off for now.
I had a great week overhauling the media viewer's image rendering. Zooming and navigation should be a lot smoother now!
image tiles
tl;dr: the media viewer now zooms and navigates with less lag and flicker
Zooming in a long way, particularly for large images, has been pretty hellish for as long as the program has existed. Historically, the client drew the whole image in memory at the zoom you desired so it could quickly show you the bit it needed on screen. Of course this meant zooming in to 400% on anything above 4k was suddenly taking a very long time to set up and eating a lot of memory to do it. As images have naturally grown over time, the problem has occurred more often and was starting to affect other systems.
My plan to fix this has been to break the image into tiles that then render on demand. The parts of the image off-screen are never drawn, saving CPU and memory and allowing arbitrary zoom. This is a significantly more complicated idea, and rewriting the whole rendering pipeline was always expected to be a multi-week 'big job'. I originally planned to just optimise and tweak the secondary systems and add in some sanity brakes this week, but I ran a couple of small tiling tests and realised if I went bonkers it would be possible to hack in a prototype. So I did!
In the media viewer, images now draw in tiles. It works a little like a browseable satellite map, where when you zoom in and pan about you see squares of data fading in (except in hydrus they appear instantly). You should now be able to zoom in as far as you like on an image pretty quick and you won't have any sudden memory needs.
Furthermore, I have written a cache for these image tiles. This saves CPU when revisiting different images or zooms, so when you flick back and forth between two normal things, it should now be instant! It still takes 20-200ms to view or zoom most images the first time, but going back to that view or zoom within a minute or so should be really smooth. The cache starts at a healthy 256MB this week. I think that will cover most users very well (in screen real estate, it works out to about 35 x 1080p worth of tiles), but you can alter it under the settings at options->speed and memory.
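Hydrus itself is written in Python, so the following is nothing more than a hedged, language-neutral sketch (written in TypeScript here, with invented names) of what a byte-budgeted, least-recently-used tile cache of this general sort tends to look like:

```typescript
// Illustrative sketch only -- not hydrus code. Tiles are keyed by file, zoom
// and tile position; the cache evicts least-recently-used entries once a byte
// budget is exceeded.
interface Tile {
  pixels: Uint8Array; // resized RGBA data for just this tile
}

class TileCache {
  // Map iterates in insertion order, which gives a cheap LRU.
  private tiles = new Map<string, Tile>();
  private currentBytes = 0;

  constructor(private maxBytes: number) {}

  private static key(fileId: number, zoom: number, col: number, row: number): string {
    return `${fileId}:${zoom}:${col}:${row}`;
  }

  get(fileId: number, zoom: number, col: number, row: number): Tile | undefined {
    const key = TileCache.key(fileId, zoom, col, row);
    const tile = this.tiles.get(key);
    if (tile !== undefined) {
      // Refresh recency by moving the entry to the end of the map.
      this.tiles.delete(key);
      this.tiles.set(key, tile);
    }
    return tile;
  }

  put(fileId: number, zoom: number, col: number, row: number, tile: Tile): void {
    const key = TileCache.key(fileId, zoom, col, row);
    const existing = this.tiles.get(key);
    if (existing !== undefined) {
      this.currentBytes -= existing.pixels.byteLength;
      this.tiles.delete(key);
    }
    this.tiles.set(key, tile);
    this.currentBytes += tile.pixels.byteLength;
    // Evict the oldest tiles until we are back under budget.
    while (this.currentBytes > this.maxBytes && this.tiles.size > 0) {
      const [oldestKey, oldestTile] = this.tiles.entries().next().value as [string, Tile];
      this.currentBytes -= oldestTile.pixels.byteLength;
      this.tiles.delete(oldestKey);
    }
  }
}

// 256MB budget, matching the new default under options->speed and memory.
const tileCache = new TileCache(256 * 1024 * 1024);
```

The key point is that entries are small per-tile buffers rather than whole rendered images, so even a modest budget covers a lot of on-screen real estate.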
And I did some misc work improving the rendering pre-fetch logic when you browse in the media viewer. Huge files won't stomp all over the image renderer cache any more, which should make browsing through a series of giant images far less jank. If you are feeling advanced, you can now edit the prefetch timing and distance settings too, also under options->speed and memory.
I am really pleased with this week's work, but there are some drawbacks: I did it quick, so I cannot promise it is good. The most obvious bug already is that at around 200-500% zoom you start to see tiling artifacts. I know what causes this (interpolation algorithms not getting full pixel neighbour data from my simple tessellating tiles) and have a plan to fix it (adding a tile border pre-resize, and then cropping). There is also an issue when the 'virtual' image exceeds about 32,000x32,000, so I hacked a zoom block for that. There may be some weird files that render with other stitching artifacts or bad tile data. Note also that hydrus's 'Animation' renderer (the soundless fallback if you do not have mpv support) does NOT use tiling yet, so it still sucks at zooming! Please let me know how you get on!
If you have a steam-powered GPU or a machine with only 4GB of ram, you might like to wait for 439 so I can address any surprise bugs or performance issues.
PTR and account permissions
The PTR is changing how its accounts work. The shared public account is transforming to a 'read-only' account that can only download, so if you want to upload, you'll be going to manage services to auto-create your own privileged account. This is being done to improve janitor workflow for the various petitions, which were all being merged together because of the shared account. With the recent network updates, it will soon be easier for janitors to send simple messages back to these individual accounts, like 'that proposed sibling was not approved because...'.
Unfortunately, various permission and account-management code has not been tested much until now, so as the PTR guys have been trying this stuff out, I have been working to improve bad notifications and workflows. This week I rounded out account permissions testing with uploading. Hydrus no longer tries to upload content the current account does not have permission for, and if you end up in that situation, popup messages now tell you what is going on. It also catches if your account is currently 'unsynced', with instructions to fix.
Similarly, under 'manage siblings/parents', you can now see and edit all tag repositories (previously, they were hidden if you currently had no permission), but you get a label telling you if you don't have permission.
full list
media viewer:
I have hacked in tile-based image rendering for the media viewer. this has always been planned as a larger, longer-term job, but the problem of large images is only getting worse, so I decided to just slam out a prototype in a week. if you have a steam-powered GPU or 4GB ram, you might like to wait until next week to update so I can iron out any surprise bugs or performance problems
images are now cut into tiles that are rendered on demand, so whenever the image is zoomed larger than the media viewer window, only those tiles currently in view have CPU and memory spent on resizing and storage. as you pan around, new tiles are rendered as needed, and old discarded. this makes zooming in super fast and low memory, even for large images!
although I am happy with this, and overall we are talking a huge improvement on previous performance, it is ugly fast code. it may fail for some unusual files. it slices and blits bitmaps around your video memory much faster than before, so some odd GPUs may also have problems. I haven't seen any alignment artifacts (1-pixel thick missing columns or rows), but some images may produce them. more apparent are some pretty ugly tile artifacts that show up between 200% and 500% zoom (interpolation algorithms, which rely on neighbour pixels, are missing border data with my simple system). I will consider how best to implement more complicated but stitch-correct overlapping tiles in future
furthermore, a new 'image tile' cache is added. you can customise size and timeout under _options->speed and memory_ like for images and thumbnails. this is a dedicated cache for remembering image resize computation across images and zooms. once you have seen both situations once, flicking back and forth between two images or zoom levels is now generally always instant! this new cache starts at a healthy default of 256MB. let's see how that amount works out IRL--I think it will be plenty
I tuned the image renderer cache--it no longer caches huge images that eat more than 25% its total size--meaning these images only hang around as long as you are looking at them--and the prefetch call that pre-renders several files previous/next to the current image no longer occurs on images that would eat more than 10% the cache size. this should greatly reduce weird flicker and other lag when browsing through a series of mega-images (which before would stomp through the cache in quick succession, barging each other out of the way and wasting a bunch of CPU). in real world terms, this basically means that with an image cache of 200MB, you should have slower individual image performance but much better overall performance looking at images with more than about 5k resolution. the dreaded 14,000x12,000 png will still bonk you on the head to do the first render, but it won't try to uselessly prefetch or flush the whole cache any more
if you are currently looking at a static image, neighbour prefetch now only starts once the image is rendered, giving the task in front of you a bit more CPU time
new options for prefetch delay and previous/next distance are added to 'speed and memory'
note this does not yet apply to the old hydrus animation renderer. that still sucks at high zoom!
another future step here is to expand prefetch to tiles so the first view of the 'next' media is instant, but let's let all this breathe for a bit. if you get bugs, let me know!
due to a Qt issue, I am stopping zoom-in events that would make the 'virtual' size of the image greater than 32,000x32,000
.
account permission improvements:
to group sibling and parent petitions by uploader (and thus help janitor workflow), the PTR is moving to a system where the public account is download-only and accounts that can upload content are auto-generated in manage services. this code has not been tested much before, and it revealed some very bad reporting and handling of current permissions. I move this forward this week:
if your repository account is currently unsynced from a serious previous error, any attempt to upload pending data will result in a little popup and the upload being abandoned
manage tag siblings and parents will now show service tabs even if the account for those services does not seem currently able to upload tags or siblings
if your repository account is currently unsynced from a serious previous error, this is now noted in red text in manage siblings and manage parents
if your repository account does not have sibling/parent upload permission, this is now noted in red text in manage siblings and manage parents. you will be able to pend and petition siblings and parents ok
if your repository account does not have mapping/sibling/parent upload permission of the right kind, your client will no longer attempt to upload these content types, and if there is pending count for one of these types, a popup will note this on an upload attempt
.
the rest:
added https://github.com/NO-ob/LoliSnatcher_Droid to the Client API help!
improved some error handling, reporting, and recovery when importing serialised pngs. specific error info is now written to the log as well
fixed a secondary error when dropping non-list, non-downloader pngs on Lain's easy downloader import window, and fixed a 'no interesting objects' reporting test when dropping multiple pngs
added a 'cache report mode' to help debug image and thumb caching issues
refactored the media viewer code to a new 'canvas' submodule
improved the error reporting when a thumbnail cannot be generated for a file being imported
fixed an error in zoom center calculation when a change zoom event was sent in the split-second during media viewer initialisation
I think I fixed an issue where pages could sometimes not automatically move on from 'loading initial files' statusbar text when initialising the session
the requirements.txt now specifies 'requests' 2.23.0 exactly, as newer versions seemed to be giving odd urllib3 attribute binding errors (seems maybe a session thread safety thing) when recovering from connection failures. this should update the macOS build as well as anyone running from source who wants to re-run the requirements.txt. I hacked in a catch for this error case anyway, just a manual retry like a normal connection error, we'll see how it goes (issue #665)
patched an unusual file import bug for a flash file with an inverted bounding box that resulted in negative reported resolution. flash now takes absolute values for width and height
next week
Back to multiple local file services. Mostly more backend cleanup and prepping File Import Options and the Client API for talking to multiple locations.
0 notes
cladeymoore · 4 years
Text
Optimizing React Native
by Nick Cherry, Staff Software Engineer
Over the past eight months, Coinbase has been rewriting its Android app from scratch with React Native. Read about some of the performance challenges we encountered and overcame along the way.
If you’re interested in technical challenges like this, please check out our open roles and apply for a position.
Over the past eight months, Coinbase has been rewriting its Android app from scratch using React Native. As of last week, the new and redesigned app has been rolled out to 100% of users. We’re proud of what our small team has been able to accomplish in a short amount of time, and we continue to be very optimistic about React Native as a technology, expecting it to pay continued dividends with regard to both engineering velocity and product quality.
That being said, it hasn’t all been easy. One area where we’ve faced notable challenges has been performance, particularly on Android devices. Over the next few months, we plan to publish a series of blog posts documenting various issues we’ve run into and how we’ve mitigated them. Today we’ll be focusing on the one that affected us the most: unnecessary renders.
Things Were Great Until They Weren’t
Early in the project, the app’s performance felt fine. It was nearly indistinguishable from a fully native product, even though we hadn’t spent any time optimizing our code. We were aware of other teams facing (and overcoming) performance challenges with React Native, but none of our preliminary benchmarks gave us reason to be alarmed. After all, the app we were planning to build was mostly read-only, didn’t need to display any massive lists, and didn’t require animations that couldn’t be offloaded to the native driver.
However, as more features were added, we started noticing a decline in performance. At first the degradations were subtle. For example, even with our production build, navigating to new screens could feel sluggish and UI updates would be slightly delayed. But soon it was taking over a second to switch between tabs, and after landing on a new screen, the UI might become unresponsive for a long period of time. The user experience had deteriorated to a point that was launch-blocking.
Identifying the Problem
It didn’t take us long to recognize the correlation between UI jank and JavaScript frame rate. After a user interaction, we would commonly observe the JS FPS dropping to low (or negative!) single digits for multiple seconds. What wasn’t so obvious to us was why. Only a month earlier the app’s performance was relatively good, and none of the features we’d added seemed to be particularly burdensome on their own. We used React’s Profiler to benchmark large components that we hypothesized might be slow and found that many were rendering more than they needed to be. We managed to reduce re-renders for these larger components with memoization, but our improvements didn’t move the needle. We also looked at the render durations for a few atomic and molecular components, none of which seemed unreasonably expensive.
To get a more holistic view of where re-rendering was most costly, we wrote a custom Babel plugin that wrapped every JSX element in the app with a Profiler. Each Profiler was assigned an onRender function that reported to a context provider at the top of the React tree. This top-level context provider would aggregate render counts and durations — grouping by component type — then log the worst offenders every few seconds. Below is a screenshot of output from our initial implementation:
[Screenshot: aggregated render counts and durations per component, as reported by the profiler]
As we observed in our previous benchmarks, the average render times for most of our atomic/molecular components were adequate. For example, our PortfolioListCell component took about 2ms to render. But when there are 11 instances of PortfolioListCell and each renders 17 times, those 2ms renders add up. Our problem wasn’t that individual components were that slow, it was that we were re-rendering everything far too much.
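The plugin and provider themselves aren't shown in the post, but a rough sketch of that aggregation layer might look like the following (all names here are hypothetical, not Coinbase's actual code):

```tsx
import React, { createContext, useCallback, useContext, useEffect, useRef } from 'react';

type RenderStats = Record<string, { count: number; totalMs: number }>;

// Hypothetical reporting channel that the generated Profilers call into.
const RenderStatsContext = createContext<(id: string, ms: number) => void>(() => {});

export function RenderStatsProvider({ children }: { children: React.ReactNode }) {
  const stats = useRef<RenderStats>({});

  const report = useCallback((id: string, ms: number) => {
    const entry = (stats.current[id] ??= { count: 0, totalMs: 0 });
    entry.count += 1;
    entry.totalMs += ms;
  }, []);

  useEffect(() => {
    // Every few seconds, log the components with the most cumulative render time.
    const interval = setInterval(() => {
      const worstOffenders = Object.entries(stats.current)
        .sort(([, a], [, b]) => b.totalMs - a.totalMs)
        .slice(0, 10);
      console.log('worst offenders:', worstOffenders);
      stats.current = {};
    }, 5000);
    return () => clearInterval(interval);
  }, []);

  return <RenderStatsContext.Provider value={report}>{children}</RenderStatsContext.Provider>;
}

// Roughly what the Babel plugin wraps around each JSX element.
export function ProfiledElement({ id, children }: { id: string; children: React.ReactNode }) {
  const report = useContext(RenderStatsContext);
  return (
    <React.Profiler
      id={id}
      onRender={(profilerId, _phase, actualDuration) => report(profilerId, actualDuration)}
    >
      {children}
    </React.Profiler>
  );
}
```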
We Did This To Ourselves
To explain why this was happening, we need to take a step back and talk about our stack. The app relies heavily on a data-fetching library called rest-hooks, which the Coinbase Web team has been happily using for over a year now. Adopting rest-hooks allowed us to share a significant amount of our data layer code with Web, including auto-generated types for API endpoints. One notable characteristic of the library is that it uses a global context to store its cache. One notable characteristic of context, as described by the React docs, is that:
All consumers that are descendants of a Provider will re-render whenever the Provider’s value prop changes.
For us, this meant that any time data was written to the cache (e.g. when the app receives an API response), every component accessing the store would re-render, regardless of whether the component was memoized or referencing the changed data. Exacerbating the re-rendering was the fact that we embraced a pattern of co-locating data hooks with components. For example, we frequently made use of data-consuming hooks like useLocale() and useNativeCurrency() within lower-level components that formatted information according to the user’s preferences. This was great for developer experience, but it also meant that every component using these hooks — directly or indirectly — would re-render on writes to the cache, even if they were memoized.
Another part of our stack worth mentioning here is react-navigation, which is currently the most widely used navigation solution in the React Native ecosystem. Engineers coming from a web background might be surprised to learn that its default behavior is for every screen in the Navigator to stay mounted, even if the user isn’t actively viewing it. This allows unfocused screens to retain their local state and scroll position for “free”. It’s also practical in the mobile context, where we commonly want to show multiple screens to the user during transitions, e.g. when pushing onto / popping from stacks. Unfortunately for us, this also means that our already-problematic re-rendering could become exponentially worse as the user navigates through the app. For example, if we have four tab stacks and the user has navigated one screen deep on each stack, we would be re-rendering the greater part of eight screens every time an API response came back!
Container Components
Once we understood the root cause of our most pressing performance issues, we needed to figure out how to fix it. Our first line of defense against re-rendering was aggressive memoization. As we mentioned earlier, when a component consumes a context, it will re-render when that context’s value changes, regardless of whether the component is memoized. This led us to adopt a functional container pattern, where we would hoist data-consuming hooks to a thin wrapper component, then pass the return values of those hooks down to presentational components that could benefit from memoization. Consider the gist below. Whenever the useWatchList() hook triggers a re-render (i.e. any time the data store is updated), we also need to re-render our Card and AssetSummaryCell components, even if the value of watchList didn’t change.
https://medium.com/media/c2ae5f4f5b5a8bb8fd6b4fa563d3d246/href
When applying the container pattern, we move the useWatchList() call to its own component, then memoize the presentational part of our view. We’ll still re-render WatchListContainer every time the data store updates, but this will be comparatively cheap because the component does so little.
https://medium.com/media/363287c1a1d57d694b7ffe056a2798e2/href
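In sketch form, the before/after looks roughly like this (component and hook names are taken from the prose above; the imports and details are illustrative, not the exact production code):

```tsx
import React, { memo } from 'react';
// Placeholder imports: these components and hooks are the ones named in the
// post, not real modules you can install.
import { Card, AssetSummaryCell } from './components';
import { useWatchList, Asset } from './data';

// Before: the data hook lives inside the view, so every write to the rest-hooks
// cache re-renders Card and every AssetSummaryCell, memoized or not.
export function WatchListBefore() {
  const watchList = useWatchList();
  return (
    <Card>
      {watchList.map((asset: Asset) => (
        <AssetSummaryCell key={asset.id} asset={asset} />
      ))}
    </Card>
  );
}

// After: a thin container owns the hook...
export function WatchListContainer() {
  const watchList = useWatchList();
  return <WatchList watchList={watchList} />;
}

// ...and the presentational component is memoized, so it only re-renders when
// the watchList prop it receives actually changes.
const WatchList = memo(function WatchList({ watchList }: { watchList: Asset[] }) {
  return (
    <Card>
      {watchList.map((asset) => (
        <AssetSummaryCell key={asset.id} asset={asset} />
      ))}
    </Card>
  );
});
```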
Stabilizing Props
The container pattern was a good start, but there were a few pitfalls we needed to be careful to avoid. Take a look at the example below:
https://medium.com/media/32e5d59fba456f6e4fc011b3cbdfd0e0/href
It may appear that we’re protecting the memoized Asset from data-related re-renders by hoisting both useAsset(assetId) and useWatchListToggler() to a container component. However, the memoization will never actually work, because we’re passing an unstable value for toggleWatchList. In other words, every time AssetContainer re-renders, toggleWatchList will be a new anonymous function. When memo performs a shallow equality comparison between the previous props and the current props, the values will never be equal and Asset will always re-render.
In order to get any benefit from memoizing Asset, we need to stabilize our toggleWatchList function using useCallback. With the updated code below, Asset will only re-render if asset actually changes:
https://medium.com/media/ad42c990241d0c5b6ca9a9b17ee20791/href
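A minimal sketch of that fix, with the hook shapes assumed for illustration:

```tsx
import React, { useCallback } from 'react';
// Placeholder imports: names follow the prose; the real modules and hook
// signatures may differ.
import { Asset } from './components';               // memoized presentational component
import { useAsset, useWatchListToggler } from './data';

// Broken: the inline arrow is a brand-new function on every render of the
// container, so the memoized Asset's shallow prop check always fails.
export function BrokenAssetContainer({ assetId }: { assetId: string }) {
  const asset = useAsset(assetId);
  const toggle = useWatchListToggler();
  return <Asset asset={asset} toggleWatchList={() => toggle(assetId)} />;
}

// Fixed: useCallback keeps the function reference stable, so Asset only
// re-renders when `asset` itself changes.
export function AssetContainer({ assetId }: { assetId: string }) {
  const asset = useAsset(assetId);
  const toggle = useWatchListToggler();
  const toggleWatchList = useCallback(() => toggle(assetId), [toggle, assetId]);
  return <Asset asset={asset} toggleWatchList={toggleWatchList} />;
}
```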
Callbacks aren’t the only way we can inadvertently break memoization, though. The same principles apply to objects as well. Consider another example:
https://medium.com/media/e198535efb18f344cfb9fd4533136c87/href
With the above code, even if the Search component was memoized, it would always re-render when PricesSearch renders. This happens because spacing and icon will be different objects with every render.
To fix this, we’ll rely on useMemo to memoize our icon element. Remember, each JSX tag compiles to a React.createElement invocation, which returns a new object every time it’s called. We need to memoize that object to maintain referential integrity across renders. Since spacing is truly constant, we can simply define the value outside of our functional component to stabilize it.
After the following changes, our Search component can effectively be memoized:
https://medium.com/media/42aa61dba9ca41ff91e087fbb55fd6b1/href
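Roughly, and with placeholder component names:

```tsx
import React, { useMemo } from 'react';
// Placeholder imports: Search is assumed to be wrapped in React.memo, and
// SearchIcon stands in for whatever icon component the real code uses.
import { Search, SearchIcon } from './components';

// `spacing` never changes, so hoisting it out of the component keeps the
// object reference stable across renders.
const SPACING = { vertical: 8, horizontal: 16 };

export function PricesSearch() {
  // A JSX tag is a React.createElement call that returns a new element object
  // every time, so the element itself is memoized to preserve referential
  // equality for Search's shallow prop comparison.
  const icon = useMemo(() => <SearchIcon size={16} />, []);
  return <Search spacing={SPACING} icon={icon} />;
}
```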
Short-Circuiting Renders on Unfocused Screens
Memoization significantly reduced render counts / durations for each screen. However, because react-navigation keeps unfocused screens mounted, we were still wasting valuable resources re-rendering a great deal of content that wasn't visible to the user. This led us to start digging through react-navigation's documentation in search of an option that might alleviate this problem. We were hopeful when we discovered unmountOnBlur. Toggling the flag to true did reduce our renders considerably, but it only applied to unfocused tabs' screens, keeping all of the current tab stack's screens mounted. More damningly, it resulted in a flicker when switching between tabs and would lose the screen's scroll position and local state when the user navigated away.
Our second attempt involved putting screens into suspense (falling back to a fullscreen loading spinner) by throwing a promise when the user navigated away, then resolving the promise when the user returned, allowing the screen to be presented again. With this approach, we could eliminate unnecessary renders and retain local state for all unfocused screens. Unfortunately, the experience was awkward because users would briefly see a loading indicator when returning to an already visited screen. Furthermore, without some gnarly hacks, their scroll position would be lost.
Eventually we landed on a generalized solution that prevented re-rendering on all unfocused screens without any negative side effects. We achieve this by wrapping each screen in a component that overrides the specified context (rest-hooks’ StateContext in this case) with a “frozen” value when the screen is unfocused. Because this frozen value (which is consumed by all components/hooks within the child screen) remains stable even when the “real” context updates, we can short-circuit all renders relating to the given context. When the user returns to a screen, the frozen value is nullified and the real context value gets passed through, triggering an initial re-render to synchronize all the subscribed components. While the screen is focused, it will receive all context updates as it normally would. The gist below shows how we accomplish this with DeactivateContextOnBlur:
https://medium.com/media/9936dceff26615278ed01c25f8c1af06/href
And here is a demonstration of how DeactivateContextOnBlur can be used:
https://medium.com/media/3fa33dd2b4aee447efd079ad50f68dd6/href
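A minimal sketch of the idea, assuming react-navigation's useIsFocused hook and rest-hooks' StateContext export (the real component may differ in its details):

```tsx
import React, { ReactNode, useContext, useRef } from 'react';
import { useIsFocused } from '@react-navigation/native';
// StateContext is the rest-hooks store context mentioned above; the exact
// import path can vary between rest-hooks versions.
import { StateContext } from 'rest-hooks';

// While the screen is blurred, children keep seeing the last value we
// captured, so store updates can't cascade renders into this subtree. Once
// the screen regains focus, the live value flows through again and the
// subscribed components re-synchronize.
export function DeactivateContextOnBlur({ children }: { children: ReactNode }) {
  const isFocused = useIsFocused();
  const liveValue = useContext(StateContext);
  const frozenValue = useRef(liveValue);

  if (isFocused) {
    // Keep the snapshot current while the screen is visible.
    frozenValue.current = liveValue;
  }

  return (
    <StateContext.Provider value={isFocused ? liveValue : frozenValue.current}>
      {children}
    </StateContext.Provider>
  );
}

// Usage (hypothetical screen component):
// <DeactivateContextOnBlur>
//   <HomeScreen />
// </DeactivateContextOnBlur>
```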
Reducing Network Requests
With DeactivateContextOnBlur and all of our memoization in place, we had dramatically reduced the cost of unnecessary re-renders in our app. However, there were a few key screens (i.e. Home and Asset) that were still overwhelming the JavaScript thread when they first mounted. Part of the reason this was happening was because each of these screens needed to make nearly a dozen network requests. This was due to limitations of our existing API, which in some cases required n+1 queries to fetch the asset data our UI required. Not only did these requests introduce computational overhead and latency, but whenever the app received an API response, it would need to update the data store, triggering more re-renders, reducing our JavaScript FPS, and ultimately making the UI less responsive.
In the spirit of delivering value quickly, we opted for the low-cost solution of adding two new endpoints — one to return watchlist assets for the Home screen and another to return correlated assets for the Asset screen. Now that we were embedding all the data relevant to these UI components in a single response, it was no longer necessary to perform an additional request for each asset in the list. This change noticeably improved the TTI and frame rate for both relevant screens.
While the ad hoc endpoints benefited two of our most important screens, there are still several areas in the app that suffer from inefficient data access patterns. Our team is currently exploring more foundational solutions that can solve the problem generally, allowing our app to retrieve the information it needs with far fewer API requests.
Summary
With all the changes described in this post, we were able to reduce our render count and total time spent rendering by over 90% (as measured by our custom Babel plugin) before releasing the app. We also see far fewer dropped frames, as observed via the React performance monitor. One key takeaway from this work is that building a performant React Native app is in many ways the same as building a performant React web app. Given the comparatively limited power of mobile devices and the fact that native mobile apps often need to do more (e.g. maintain a complex navigation state that keeps multiple screens in memory), following performance best practices is critical to building a high-quality app. We've come a long way in the past few months, but still have plenty of work ahead of us.
0 notes
tech-battery · 4 years
Text
The Microsoft Surface Duo is Really Bad
The Microsoft Surface Duo has been in my hands for 72 hours and I can’t bring myself to use it much longer. After just three days, this will join the Motorola Razr on the list of the most deeply unpleasant devices I’ve ever touched, though there is a touch of hope here that the Razr didn’t have.
The hardware, while pretty and premium-feeling, is so weirdly presented from a modern mobile device perspective, but the software might be the biggest problem area of all. There is a lot of bad going on here, some of which could be addressed (software), while a big portion (hardware) is stuck with questionable decisions made by its creators.
This isn’t a review, because this device needs a lot of work, so instead here’s what Microsoft should address before anyone considers buying a Surface Duo 2. That’s right, I’m already looking to the next generation device because this version should be skipped unless Microsoft turns into a magician with Android software.
Hardware trying to hide its many flaws
Since the software is the critical piece here because it can be fixed, let me first run through my thoughts on the hardware.
I think Microsoft will get a bunch of credit for building a premium device, but let’s be honest – this thing is by no means close to being perfect. Building a fancy hinge and surrounding it with thin glass and metal should not distract you from a handful of obvious flaws. The missing pieces should not be played down and Microsoft doesn’t deserve a pass for leaving out NFC or a real camera or stereo speakers or some form of a display on the front.
Designed like a laptop, which aren’t meant for hands
Holding the Surface Duo can be an elegant experience, when it’s unfolded and both hands bend its beautiful hinge system to the perfect angle, letting you dive into content on each screen as you settle into the couch. But when you need to get something done, which often means folding the phone back to get on a single screen, the design immediately lets you know that it wasn’t made for this.
Pick up your current phone and feel where the glass meets the metal side panels. It’s smooth, right? You can’t actually feel sharpness or the edge of the glass in most cases. On the Surface Duo, it’s all sharp glass edges no matter where you grip from the side. It looks like Microsoft glued each panel onto a metal frame in a way that I could probably slip a fingernail behind it and peel it off (I’m not going to try).
Outside of the glass, the hinge, when folded, is sharp as well. So again, if you are in one-handed mode, depending on the hand you have the device in could mean a squared-off point stabbing you in the palm. It’s pretty unpleasant. Now, you may think, “Kellen, you are supposed to use it open with both screens,” but you aren’t really. The idea here is a device that meets your needs and can convert into a shape or style at any moment depending on the task. Typing is easier when the device is folded back, and that’s something you do on phones a lot.
What Microsoft designed with Surface Duo is a mini laptop. Like, if you took a picture of this with nothing to reference the size, you’d think it was a Surface laptop of some kind. The problem with that is laptops aren’t meant to be held. They are meant for laps or desks or tabletops, so you get this odd shape that isn’t hand-friendly for use cases where you might need a phone or a device more manageable with a single hand. And good luck ever opening it with one hand.
By designing a laptop, they also left out an external display, yet with an operating system like Android, you can’t really live without one. Notifications and always-on information are critical to staying productive with modern smartphones, but with Surface Duo, you almost have to buy a smartwatch to help keep you informed of your phone’s status since it can’t do it. I don’t know how Microsoft could have implemented a display that wouldn’t destroy their ultra-minimal aesthetic, but they certainly should have tried to figure out a way. I don’t get paid enough to do it for them.
Microsoft clearly worked so hard to make this device perfectly uniform, with matching interior and exterior glass placements, identical pieces on each side of the hinge, and no camera humps or exterior features, but that might have been a poor decision. This device is supposed to be functional in multiple ways, yet it struggles to do that because of its forced limitations.
It’s the worst camera I’ve ever used
As far as the camera goes, I never expected the Surface Duo camera to be any good, but I’m a bit shocked at how bad it actually is. This is low-level, 2012-esque bad. It actually reminds me a lot of the Essential Phone when it first launched, where you had this camera that could barely load, certainly couldn’t snap many pictures in a row, and processed like the summer intern was put in charge of imaging.
I took the Surface Duo with me for a short drive this weekend through smoke-filled Portland and attempted to take pictures from the car. I think I got two, both of which were so blurry I can barely make them out, while the rest were pictures of my lap, thanks to camera lag. The app is slow, the shutter button is never ready when you need it, and the image quality is unbelievably bad, as is the camera app.
The thing is, with the Essential Phone, we watched as they did their best to improve the camera in a hurry and someone ported the Google Camera app over for help. I’m not sure a Google Camera port would even help here.
I hate to hammer on the $1,400 price point over and over again, but Microsoft clearly sacrificed in one of the most important hardware areas of this phone so they can have a clean exterior with no humps and an interior that shuts completely flat. Sure, it looks minimal and pretty, but owning this phone means never being able to take a picture you are proud of in any lighting condition.
NFC…
The NFC thing I just don’t get. It’s a simple, tiny chip that makes paying for items hands-free. It has been a staple spec in almost all smartphones for a number of years now at all price points. Now, I know that Microsoft was developing Surface Duo before COVID destroyed the world and scared us out of touching anything with our hands, but leaving out NFC is something we all hammered OnePlus for back in 2015. It’s 2020, and while I get that not everyone has fully adopted mobile payments, they more than likely will over the next couple of years. If you buy this $1,400 device (!), you’ll never be able to make them.
Anything good about the hardware?
Of course! The AMOLED displays both look great, battery life is way better than I expected (easily full day), the fingerprint reader is quick and handy, and that hinge really is spectacular. I love the weight of Surface Duo and being able to adjust the setup to various angles to let me consume content or stay more productive. Microsoft has certainly done some praise-worthy things here, they just seem to have gotten caught up their own quest for extreme minimalism.
Software they should be embarrassed by
Oh boy, this isn’t going to be fun.
Before we dive in, know that my unlocked Surface Duo is on software version 2020.812.86. This is the update that I picked up straight out of the box and is currently showing from Microsoft as being the newest and most stable. While I haven’t looked at many reviews of this device, it sounds like this update was pushed to reviewers at the last minute and turned the device from unusable to the state I’m in now, which is still bad. I can’t imagine it being worse than this. Good lord.
Jank, lag, jank, lag, jank
When you first fire up Surface Duo, you want to dance between screens, run two apps at once, drop some in the middle to see what happens as they expand across both, and throw as many tasks at it as possible because no other device is setup the way this one is. Unfortunately, in this current early software, nothing works smoothly or enjoyably.
The animations are rough, jittery, and inconsistent in how they take you in and out of apps. I’d blame the 6GB RAM or the Snapdragon 855, but it seems more like Microsoft hasn’t tuned or optimized the OS properly or paid attention to little details.
I can’t tell you how often I wake the phone and it forces me to wait several seconds for it to be ready for use. Or when I swipe between home screens and I see this noticeably slow jitter as it makes that move. Or when only one screen wakes initially after opening and the second screen leaves me wondering if it’s dead or I’m going to need to reboot. This stuff happens all of the time.
There are other little issues too, like in how poorly the home gesture is implemented. For example, Google does this cool thing with the Pixel Launcher on its own phones where you exit an app, and if that app has a home screen shortcut, the animation zooms out back into that app’s spot. I bring that up because Microsoft sort of emulated that with folders that do open in and out from the same spot, yet the apps as you head back home have this weird mind of their own that is only enhanced by poorly done animations.
If that sounds like nitpicking, it is a bit, until you realize how deliberate Microsoft was with a bunch of other animations. They clearly spent a lot of time making the bottom navigation tray app shortcuts slide from right to left as you move between screens or open apps or the app drawer. The full home view preview animation that pops-in as you long-press on a home screen is also quite nice. It’s obvious that Microsoft wants everything to be pretty and precise, it’s just that in some of the most important areas they have fallen so short.
Gestures trying to do too much
One of the key features for Surface Duo is its gesture navigation because this is the way you move apps from screen to screen, expand an app to two screens, and navigate Android. For the most part, Microsoft has a good plan for how this should work, they just need to fine-tune it in several ways.
For one, the swipe up gesture to go home is wildly unpredictable. Sometimes it swipes an app to the next screen when you meant to go home, sometimes it (obviously) wanted you to swipe up further and leaves the app open, or it might not even register what you tried to do.
The navigation area also creeps up too high and often impacts app experiences. Instagram is the best example I have, where a good portion of the bottom navigation bar within the app is unusable at times because the system navigation area overlaps it. Often times tapping on home or the search button won’t do anything because the system thinks you are initiating a swipe gesture.
Because Microsoft turned the side-to-side swipe into a gesture that moves apps from one screen to the next, they killed off the quick app switch gesture from Google. In other words, there’s no easy way to get back to your most recent app except for swiping up and holding to go into the app switcher. That’s not the worst idea, but it’s one of the slowest and most stuttery parts of the OS.
Like the animation situation, Microsoft can fix this (I think), it’s just going to take time.
Notification area needs work
For a device with two screens, it shouldn’t be hard to nail the notification experience, but this has been one of the most frustrating areas. Microsoft will hopefully learn very quickly how important notifications are to Android.
For one, the notification swipe down requires that you reach the top of the status bar and swipe down. You can’t do the middle of the screen swipe down like almost every other phone maker has implemented at this point. Instead, a middle screen swipe brings you into a universal search. That’s fine I guess.
Once in notifications, though, it’s a baffling setup. Microsoft gives you these two huge screens to view everything on, except they made the notification pulldown tiny and skinny so that your notifications don’t show enough info. Look at this image and tell me why they wouldn’t expand this pulldown by a half inch or more on each side.
Microsoft also changed a core interaction with notifications, where tapping on them doesn’t directly open the associated app at first. Instead, if the notification can expand, a first tap does just that – expands it. For Gmail, if you have multiple email notifications stacked, each tap first expands each notification rather than opening that email. Google put a dropdown arrow in notifications to do that for you, so I’m not sure why Microsoft went around this. As someone who is used to the way Android notifications work, the extra taps aren’t welcomed.
I’ve even had issues where no notifications show until I swipe down the area and realize there are several up there waiting for me. Of course, not having any sort of outer display has meant not getting notifications or realizing I had any until I grabbed the phone and opened it.
This is Android and so you have got to nail the notification system. It’s the lifeblood of the OS.
I hope they can fix the touch responsiveness
This could be a hardware issue, but not since the Essential Phone (2nd reference!) have I used a device that felt this behind each swipe or that failed to register this many touches.
Dragging apps across the screen (which is a key feature!) always feels like it’s not properly tracking with your finger. Tapping on apps and waiting for them to open only to realize the device didn’t recognize your touch is a frequent thing. The device missing your gesture swipe up, back gesture in, or notification swipe down happen more times than I can keep track of.
The device also does this move where you fold one half back and flip it over, so that you can switch displays. It’s cool and I’m assuming it uses the gyroscope/accelerometer in the device, but the problem is that it then asks you to double tap on the screen to make that switch. It works the first try like 1 out of 3 times.
It’s just weird to see such a flaw. It’s too bad too, because the displays look great.
So many bugs
A few of the other issues I've regularly stumbled upon are slow typing experiences in both Swiftkey and Gboard, OS lock-ups where the gesture navigation breaks, apps often weirdly appearing on top of each other from one screen to the next, the wallpaper disappearing and turning to black, the auto-rotate having a mind of its own, and I have had to reboot several times because the system fails to respond to much.
Features I need today or yesterday
Stepping away from those issues, there’s a couple of features I think Microsoft needs to work on immediately to improve this experience.
For one, this device needs an always-on display or double-tap to wake right now. Because they didn’t put a display on the outside, I’ve found myself wanting to unfold and set it on a desk or in a stand to give me easier access to notifications, the time, act as a bedside clock, etc. But Microsoft didn’t do either of those things, so as notifications roll in, the entire screen fully lights up briefly and then it goes back to being dark and locked.
Microsoft also really needs to work with developers to get more apps into the dual-screen mode. None of Google’s apps that I tested work in split screen or across both displays. As someone who doesn’t really use a single Microsoft service, this basically eliminates one of their biggest demo features. I couldn’t even test it. I’m guessing I won’t ever, unfortunately.
It feels bad, man
I hate doing write-ups like these because I see the potential with Surface Duo. In fact, when I’m chillin’ on the couch at night and need only to watch Twitch or YouTube TV or browse Twitter or look at Instagram or catch-up on news, it’s pretty great. Having two screens this close to one another that move around with a really cool hinge system is an experience I could totally get used to. Using it to cook this weekend was fun, letting my kid play games and watch shows on it in play forts he had built all weekend was cool, and whipping it out just to fold and unfold is satisfying in ways a pen click will never top.
The problem here is that Microsoft shipped a device that isn’t close to being ready for prime time. They made far too many sacrifices with the hardware, the software probably needs a half dozen updates of straight bug fixes to be decent, and we need more features in there too. If this was a developer-only device that few had access to, all of that might be somewhat acceptable. Well, that might only be acceptable if it wasn’t Microsoft who built it. I’m quite shocked that they even let this device out the door, but appalled that they did and also want $1,400 for it.
0 notes