#its so much easier to use bug as a shorthand and its in the same spirit!!!!!!!!!!!
bughusbands · 7 months
Text
I've been sitting on this for TOO LONG but look! I made myself a bugsona! His name is Norman Worman and he is a totally normal guy :)
(i know velvet worms aren't bugs but you cant stop me)
oudenoida · 4 years
Note
Can't sleep - raster and mal
The title of double agent, Raster Vellic thought, was profoundly misleading. There were more than two masks he wore on a daily basis, and the number of lies each mandated was a heavy burden that he could feel himself starting to wobble under. The idiot son, the covert scientist, the spy, the turncoat, the saboteur, the... lover? Having to remember what affectation, what lynchpin lie, what dancing deceit each one was built upon was a complicated set of maneuvers he felt himself making up the steps for as he went along; and looking out over the star-speckled ocean of Chandrila he wondered if, after taking off mask after mask after mask after mask, there was any kernel of truth buried in the heart of him at all. 
On the surface the lie at the center of his reason for being on Chandrila was easy enough. The Interplanetary Sabacc Championships were held there every year, and pretending to have a gambling addiction was a phenomenally easy way to explain why large sums of credits went missing from his Vellic Industries spending account. Far easier, at least, than attempting to explain to an accounting droid that he was spending the money on difficult-to-acquire parts and smugglers. 
The actual reason for their trip was that Raster was arranging a pickup of refined focusing crystals from a black market agent he’d had dealings with in the past. They’d give the rebellion a range advantage after he outfitted their blasters with them, and they truly needed any advantage they could get. The initial meeting had been accomplished, and after some particularly tense negotiations in the steam room (theoretically because it was one of the only places in the hotel that wasn’t bugged, but also because it made it easy to see if anyone was wearing a wire) a price had been settled upon. It was exorbitant, but he needed what he needed and he was willing to pay to make sure the rebellion had the upper hand. 
But unfortunately there were several days of downtime between the deal being arranged and the pickup happening which left his calendar wide open; and despite outward appearances Raster Vellic loathed idleness. 
There weren’t enough forms of entertainment in the galaxy to calm Raster down when he was deep in the throes of anxiety about a plan coming together. He’d filled half a notebook in tight machine-precise shorthand with plans, contingencies, and emergency protocols to be delivered to Rebel leadership whenever Mal made it back to Yavin-4. What to do if he was compromised and needed an extraction from Vellic Prime, what to do if he was compromised and unable to make it off of Vellic Prime and had to implement the dead man’s switch he’d been working on in secret, caches and dead drops he’d left at various places, what they would need to change about tactics and plans if he was discovered… anything he could think of to be helpful in any small way, but he’d run out of ideas hours ago and he wasn’t any closer to sleeping than when he’d started. He’d thought about calling Mal. His fingers had hovered over the codes to open their secure channel a dozen times, but he’d always second-guessed himself and thought better of it. 
There wasn’t any easy way to try to quantify or qualify what exactly it was that he and Luck Maloris were; and as a scientist that frustrated the fuck out of him. 
He couldn’t even pinpoint when it had turned from animosity to begrudging comrades in arms to whatever romance-adjacent thing they were now. One minute it had been barking in the entry hall of Hawen Vellic’s estate for the pilot not to get grease on his bags, the next it had been sitting in a cockpit watching the radar in an asteroid belt around Polis Massa, elbow to elbow, all he could think about the searing heat of Mal’s skin burning through the thin fabric of his overshirt, and then the next someone was slamming someone else against a corridor wall, lips hungrily finding any scrap of skin where they could beat a Morse code of truth between two consummate liars. Even now Raster thought he could smell the ghost of vanilla and wood smoke on his clothes, a reminder of the silence and safety of hyperspace and what it allowed them to share. 
He thought he could see a path of wear in the lush carpet of his suite, having been pacing it like something caged and yearning to stretch for hours. A glance at the clock told him there were still long hours before the sun rose brilliant and carnelian over the gentle waves and in an instant he knew that the night was going to be too long. Raster was halfway to the elevators, still dressed far more casually than he normally appeared in public, before the suite door shut. In the middle of the night he’d hoped that he’d make it to the landing pads unseen, but an overzealous desk clerk clocked him. 
“Evening Mr. Vell-” 
“Speak to me again and I’ll have you fed to a zoo exhibit.” The harsh tap of the cane against the agate floor punctuated his words as he continued towards the landing pad, not slowing as he looked at the clerk, “Understood?”
To his credit the poor clerk only nodded mutely and Raster gave a razor thin smile, “Good. Glad we’re on the same page. If I walk onto the landing pad and see a single employee I’m going to get my father to use this place as his next testing ground.” He could hear the panicked fumbling with a comm panel as the desk clerk cleared out any staff from view from the landing pad he was about to stride across. He didn’t necessarily need a lot of witnesses to the sight of him sleepless and yearning. 
It appeared as if he had properly put the fear of death by experimental weaponry into the poor resort employee, because the landing pad was deserted and quiet as he exited the main building, curly hair buffeted slightly by a salty breeze that smelled so much like home. With no ships currently arriving or departing the pad was lit only by basic emergency lights, but one passenger ship still had its lights on, a beacon in the murky darkness of Chandrila’s early morning. Raster knew it wasn’t Mal’s favorite thing to fly, he’d much prefer to be in his own X-Wing, but when ‘working’ for Raster he had to at least keep up appearances, which meant he was stuck in the cockpit of the Corona for the duration of their trips. The ramp was still down and Raster strode up it softly, hefting his cane over his shoulder to avoid the ringing announcing his steps. Pausing at the entrance to his ship he could feel a soft smile tugging at the corner of his lips as he looked at the back of his pilot, head half-into an access panel, the sounds of profanity under his breath as he fixed something. 
“I like to think I keep my ship in working order.” He spoke loud enough for Mal to hear him but not so loud that his voice inevitably carried out to the employees of the resort hiding just out of sight, “What are you in there breaking now?” 
He was rewarded with a profane hand gesture as the Mirialan withdrew from the access panel, wiping his hands on a rag stuffed in one pocket, “Your inertial dampeners are keyed up way too high. Sends your fuel efficiency plummeting. I’m just evening them out.” 
“Yes because ruining my careful tuning is exactly what most people do in the middle of the night. Can’t sleep?” Raster crossed the threshold into the ship and closed the hatch behind him, the soft hiss of it sealing the only sound between them for a long moment as he looked over at his pilot. “Did you know this is where my mother was from? Chandrila is where I was born actually.”
“Don’t like sitting around doing nothing.” Mal hadn’t moved to close the distance between them with his response, and so Raster did so, stepping into the much shorter man’s space and looking down, gingerly raising a hand to slide a thumb down a green jawline, just barely brushing Mal’s lip as he did so. The pilot didn’t respond to the bait of offered history, looking up and into Ras’ dark eyes as he waited him out. 
“Most people don’t think of ship maintenance as a substitute for sleep though.” It was so quiet in the ship Raster thought he could hear his own heartbeat echoing off the walls as Mal’s head dipped ever so slightly to press a kiss to the pad of his thumb. 
“We’re not most people though. You have to ask, Vellic. I’m not a mind reader.” 
The silence deepened and thickened; not the tense brittle silence of unknowing but something that could support and shelter the two men inside of it. Raster’s hand drifted down to Mal’s chest, gently grazing the exposed sliver of flesh from an unbuttoned uniform shirt. His hands spent so much time gripping wires or metal or plastic that it always shocked him how soft and warm another person was. They locked eyes and Raster’s mouth opened and shut several times, the hitch in his breath the only indication that something was lurking in the back of his throat attempting to be said. 
“.... Luck.” 
Mal’s hand settled on Ras’ hip, a steady support as they stood frozen in their small sanctuary, “Say it, Ras. Just spit it out.” 
Bringing his head down Raster nuzzled into the crook of Mal’s neck, a series of small kisses pressing a constellation into warm skin, some celestial map to try to lead him to the comfort he was seeking. “Can I spend the night with you?” 
A laugh was the first response, “It’s your ship, Mr. Vellic. Don’t really think it’s my call.” 
Pulling back Ras narrowed his eyes and glowered at the mirth in Mal’s eyes, “Not what I mean, Maloris, and you fucking know it.” 
Mal’s fingers slowly unbuttoned Raster’s shirt, sliding the thin fabric over and off smooth shoulders as he corralled him back towards the pilot’s quarters, “I don’t know if you’ll fit in the bed, you freak of nature, but we can certainly make a go of it.” 
As they careened towards bed, slamming into several walls as their faces were occupied with more than trying to find their way, Raster’s hand managed to find the switch to kill the lights and the ship fell into a dimness illuminated only by emergency running lights. He didn’t need the lights to see his way, though. With the feeling of Mal’s skin against his he knew exactly where he was going. 
Text
Copying over a bunch of comments I made on reddit, about Elm.
("Anyone using Elm in production?")
My team is. I have mixed feelings, along the lines of "I'm not a fan, but I wouldn't be surprised if I liked all the other options less". Being able to integrate with Haskell is very nice.
I haven't been looking forward to this release, which removes native modules (i.e. makes it much harder to interop with JavaScript; ports still exist, but they don't compare). It also removes user-defined operators, which I think is a shame.
("What parts of Elm do you like most and less?")
They're not really "parts of Elm", but...
I really like autogenerating the API. If I change my backend types, my frontend won't compile until I change that too. We've had some bugs with this (mostly in Haskell libraries), but for the most part it works fantastically. I just wish Haskell's records were as nice as Elm's.
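To make the benefit concrete, here's a sketch of the kind of thing a generator produces (the module and field names are hypothetical, not our actual API): each backend record becomes an Elm type alias plus a JSON decoder, so changing the backend type changes the generated alias and every use site fails to compile until it's updated.

```elm
module Api.Generated exposing (User, decodeUser)

-- Hypothetical generator output: an Elm mirror of a backend Haskell
-- record, plus the decoder that keeps the two in sync.

import Json.Decode as Decode


type alias User =
    { name : String
    , age : Int
    }


decodeUser : Decode.Decoder User
decodeUser =
    Decode.map2 User
        (Decode.field "name" Decode.string)
        (Decode.field "age" Decode.int)
```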
Oh, and the timetravel debugger is super great. You can step back through the history of your page state and see exactly where things went wrong.
My list of dislikes is longer, but I should clarify that my prior frontend experience was stuck in 2012. And, well, I'm still not entirely convinced that we shouldn't all just go back to JQuery; but I can't compare Elm to any of its actual competitors. Just to my expectations, which may not be realistic.
Most concretely, there's a lot of boilerplate. If I add a new page, or a new widget, I need to edit several different places to tell them about it. There's no way to use Elm widgets like regular HTML ones, so if I have a lot of them there's just line after line of "if a message for widget1 comes through, run the widget1 update function on the widget1 model, and update my own model with the new widget1 model and whatever else it returned. If a message for widget2 comes through, run the widget2 update function..." There's no way to generate a dropdown list from a union type that doesn't involve just listing all elements of the union (and no way for the compiler to verify that you've got them all).
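For a sense of what that boilerplate looks like (widget names hypothetical), every embedded widget needs its own branch in the Msg type, its own field in the model, and its own clause in update:

```elm
type Msg
    = Widget1Msg Widget1.Msg
    | Widget2Msg Widget2.Msg


type alias Model =
    { widget1 : Widget1.Model
    , widget2 : Widget2.Model
    }


update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        Widget1Msg sub ->
            let
                ( w1, cmd ) =
                    Widget1.update sub model.widget1
            in
            -- Store the new widget model, re-wrap its commands.
            ( { model | widget1 = w1 }, Cmd.map Widget1Msg cmd )

        Widget2Msg sub ->
            let
                ( w2, cmd ) =
                    Widget2.update sub model.widget2
            in
            ( { model | widget2 = w2 }, Cmd.map Widget2Msg cmd )
```

And so on, one clause per widget, with nothing checking that the wiring is consistent beyond the types themselves.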
Less concretely, there's something of an all-or-nothing vibe. If you're writing in Elm, then either you use Elm components or you put in a bunch of effort to make it work with Javascript components. And the ecosystem isn't mature enough to reliably have great Elm components[1]. (I don't really blame Elm for this, it was probably unavoidable when writing a new language, but it's a pain.) For a while, we were using style-elements, which does the same thing again: you use components written specifically for style-elements. (You can embed elm components, but it doesn't work very well. I do somewhat blame style-elements for this.)
For the most part, I like elm. But it has enough annoyances that I wish I was using something kind-of-similar, without really knowing if that thing could exist even in theory.
And least concretely, there's also a paternalistic vibe, which I get especially strongly from this new release. Native modules are dangerous, so they're forbidden[2]. (We've been using one for handling XSRF cookies, and we have no clue how we're going to handle that now. Most of our others will just be annoying to lose, we'll rewrite with ports or pure Elm. But "annoying for no good reason" is even more annoying.) Some people wrote silly custom operators, so they got taken away. Deeply-nested records are a bad idea, so nested record updates are left super ugly. I'd like to be treated like an adult.
[1]: And I feel like in Javascript, the equivalent components would be easier to beat with a stick until they work. Elm doesn't really let you do that, once an HTML node has been generated you can't edit it, not even to add event handlers. It either goes on the page or it doesn't. That's good for making things work reliably, but sometimes you just need an ugly hack. This is related to the paternalism thing. (Not that I claim it was a deliberate choice. But I don't imagine that if someone offered a PR to add that ability to Elm, that it would be accepted.)
[2]: Honestly, I think removing native modules is the main reason I wouldn't recommend Elm to someone right now. Elm is young, it doesn't do everything yet; and that's fine, it can't be expected to. But in 0.18 it had an escape hatch, and now it doesn't, and I don't trust it not to need one.
(I just now realised that those footnotes were visible on old reddit and my mobile app, but not new reddit. Wtf, reddit.)
Some things I wish I'd mentioned here: the lack of cookie support, more explicitly than I did. And we haven't done much error handling yet, but it looks like that's going to be boilerplate up the wazoo; I wouldn't be shocked if we need to decorate every single Http request.
(rtfeldman: "We resolved [XSRF cookies] with a few lines of back-end code - hit me up on Elm Slack and I can talk you through it!")
Thanks, I'll get in touch tomorrow.
Um, but at the risk of sounding churlish - I'll say another thing I don't really like about Elm. I get the distinct impression that a lot of discussion happens on slack instead of in public forums (like reddit or stack overflow), making it somewhat inaccessible. And then a lot of the supplementary material seems to come in the form of videos, which are also somewhat inaccessible.
I assume this works for many people, maybe even better than the forums-and-blog-posts thing that I like. But for me personally, it's just a (minor) barrier.
Still, I do appreciate your offer. If we get something that works, I hope you don't mind if I share it outside of the slack?
Update: so rtfeldman's suggestion is, in a nutshell: follow the OWASP recommendations, and instead of using a double-submit cookie (our current approach), switch to custom request headers. Elm sends content-type automatically with Http.jsonBody, and setting another custom header is also really easy.
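A sketch of what that looks like in Elm 0.19 (the header name and wrapper are placeholders, not our actual setup; any custom header works for this purpose because cross-origin requests can't set them without a CORS preflight):

```elm
import Http
import Json.Encode as Encode


-- Hypothetical helper: a POST that always attaches a custom header,
-- which the backend checks instead of a double-submit cookie.
post : String -> Encode.Value -> Http.Expect msg -> Cmd msg
post url body expect =
    Http.request
        { method = "POST"
        , headers = [ Http.header "X-Requested-With" "elm" ]
        , url = url
        , body = Http.jsonBody body -- also sets Content-Type
        , expect = expect
        , timeout = Nothing
        , tracker = Nothing
        }
```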
It's a good suggestion, and we're checking whether it works with our backend auth libraries. If it does work, it'll have other advantages. I'm grateful to rtfeldman for his time and expertise.
Still, I think it reflects poorly on Elm that we're considering changing our auth mechanism because Elm can't handle cookies (at least not in a way that's at all easy to work with, without a hack that's been taken away).
I just can't get over that cookie thing. I think the reason they're not supported is that Evan decided local storage is better. I do not want Evan to make these kinds of decisions for me.
Like, if I was contemplating picking up Elm, and someone told me it couldn't handle cookies, I wouldn't even bother investigating further. Even if I didn't expect to need cookies, what else is missing?
And as it turns out, cookies are the only thing we've found that we absolutely needed Native modules for. But I still think that's a sane reaction.
("Elm's records seem to just be Haskell records with lenses built in.")
Among other problems, Haskell makes it hard to have multiple record types with the same name, hard to have one record type which is a superset of another, and hard to write functions which accept "any record type with this field". Even accessing them is a pain, you need to import everything.
It's improving, but slowly. Meanwhile, elm records have none of those problems. They're not perfect (I'd love to have shorthand syntax for a general update function), but they're miles ahead of Haskell.
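As a concrete example of that last point, Elm's extensible records let a function accept any record carrying a given field, no imports or typeclasses required (names here are made up for illustration):

```elm
-- Works on any record that has a `name : String` field.
greet : { r | name : String } -> String
greet person =
    "Hello, " ++ person.name


-- Both of these compile:
--   greet { name = "Ras", faction = "rebellion" }
--   greet { name = "Mal" }
```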
(It's possible that "Haskell with a lens library" compares more favorably, but I haven't tried. My team is making tentative explorations into generic-lens, but that's it.)
In my foray on the Slack, I got nosey and searched to see if there was discussion of my comment. There was only a little. Evan recommended against people engaging with me, which I find a little annoying; I think that's partly because it gives me a vibe of "dismissing a troll", which is probably unfair of me. There's also a place he could be coming from of "this guy is making valid points, but yeah, Elm isn't doing what he wants and arguing about it won't help".
One thing he did say was like, «I guess that guy didn't find my points about optimising the generated code compelling. [ascii shrug]». I didn't reply on reddit because it wasn't intended for me to see, but my response might be something like:
Well yeah, when you were talking about the generated code you said it was important that there was no Javascript in the ecosystem. I don't want Javascript in the ecosystem, I want Javascript in my own codebase, like it worked in 0.18. Maybe I misinterpreted what you mean by "ecosystem", but to me that means "the packages that get shared and distributed".
Are the new optimisations incompatible with native modules? The thought never occurred to me, and your comment doesn't give me a clear answer. I suppose it's possible. (But then how do the whitelisted packages handle it?)
(Why did the thought never occur to me? Because removing native modules was never, that I saw, justified with "enables optimisations". If that was given as the reason, I would honestly find it significantly less annoying. Because then it feels like you're making difficult tradeoffs, not like you're treating me like a child.)
Still, even in that case I feel like you have a bunch of options other than removing native modules. You can say "native modules or optimization, pick one". You can say "native modules work, but optimization might fuck with them, be careful." (And probably we can find some hacks that make it work fairly reliably.) You can say "native modules work, and here are some ways to get the optimizer not to fuck with them." One of those feels like approximately no dev work, another feels like literally none.
Or maybe those solutions are all DOA, I dunno. I don't even know if there's a problem for them to solve.
bedlamgames · 6 years
Text
Q&A #65
Today: legendary slavers, race assignments, cum, and a revisit of a topic that I suppose was due to pop up again. 
Anonymous: is no haven a free download on here
Yes, absolutely! The base game will always be free to download. One thing I’d say is that if you do want to support me that’s deeply, deeply appreciated, and there’s an explanation of what the TF edition adds on the patreon. The other is that the update is very much due, so you may want to wait a day or so before getting stuck in. 
valhallaimmortan: Is there going to be a way to recruit legendary slavers? Like once you reach a certain point a randomized legendary (special slavers from recruiter and etc.) Shows up and offers to run with you, but for a steep price that depends on how successful you are? And for some reason I'm always choosing dwarves if I need a smith even if it doesn't work like that... too many fantasy books where the dwarves were smith's... well anyways keep up the awesome work!
I enslaved one of my slavers, an Ogre that failed at the Quick as you Like mission, but they are still showing up as a slaver and I get gold randomly from them leaving the encampment while they are enslaved. I can use Biomancy on them and I have already trained them to max level blow job training. That is a bug I'm guessing? And do some of the missions only count towards one set of traits? Because my slavers on random missions will only have the first slaver's traits counted.
It’s funny you mention that: thanks to a patron request, there’s an assignment in the very imminent update which has a unique slaver as a potential reward. Looking further ahead, I’m planning to do certain races who will always be one-offs due to their power/danger, who will affect the whole encampment, and you will need to consider whether it’s worth trying to recruit one or stay well, well away. 
Thanks for letting me know and will check em out. Assignment one sounds very odd though, any in particular as an example?
Anonymous: Hello! First off, love the game. Couple of bugs/thoughts (not sure where they fall into exactly). 1) When you promote a slave to a slaver, they always get the default [slaver] title, even if they might qualify for Hedge-Witch or Mistress or some such. Any chance that could be fixed? 2) There is something odd about aspects on promotions -- the region/assignment ones (Breaker, Aversol Dilettante) keep crowding out the better trait ones (Practitioner, Lay on Hands) -- any way to reverse that?
Thanks! I admit I’m torn on 1). On one hand it’s good as a shorthand, a quick reminder of what the slaver is good at; on the other you can also see it as what they used to do before being a slaver, and in this case that’s being a slave. I think I’m going to leave it for now, but might change my mind on this one. As for 2) yes, absolutely. Swapped some round a bit for the update. 
Anonymous: so some thoughts i had in the last few days, i played some other text based h games with quite some interesting mechanics, maybe they can find a place in no haven too; they play with the idea of changing not only the taste but also the effect of the cum in their game, some addictive, corrupting, some psychoactive and others simply alcoholic, the idea of addictive cum was topic in one ama so i thought these could be added in a whole new biomancy category? another thing, you said you will add
Addictive cum via orcs is a thing already, though corrupting cum has potential. Probably not biomancy, but I’m planning on revisiting corruption soon and that’d be a good place to expand it beyond just orcs. 
Anonymous: you said you will add a job to make tattoos, will this be a stylist like job? combinenthis job with biomancy or other magic to make some special looks/text flavour for hair/skin/limbs/mani and pedi features? not to mention the highly decorative tentacle options. on recent playthroughts in no haven i ve got the feeling that its
way too difficult to manage a small camp if you are looking for specific slaves, since slavers still focus on some slaves who need training, but them on guard and you have problems to send out parties for assignments, assignments, ignore them then they break your slaves. i get the feeling the need for gold and supplies is way too high at the start and way too low in mid and endgame, but that could only be me playing hardcore all the time :/ in short rebalance mid/lategame usage?
all in all keep up the good work, when will there be the next race specific assignment? after biomancer-update? or when will you start a new poll on the hypno thread for it? sry for the wall of text~~~
It is going to be a stylist encampment role. 
I do want there to be decisions to make in managing resources, in terms of slavers vs. filling roles. Saying that, I do have plans to use gold for camp upgrades in various ways to help provide other options. 
There’s a race specific assignment this update. Not sure when I’ll next do a poll on that front, as I’ve got a couple planned already I’d like to do specifically, and I’ve still got some I should do from the last poll, i.e. doing something with Fallen. So maybe some time, but not anytime soon. 
Anonymous: Have you considered doing something like a racial bonus? To me, the races of a fantasy world have distinct advantages over each other, and if say a dwarf had either an innate or a level up advantage to C:Me then the encampment role of metalworker is a lot less interchangeable, but dwarves feel more like dwarves.
There are already, really, though I realise that as it’s backend this might not be immediately obvious. Every race and subtype has their own subtable of more likely traits, and there are also various assignments where being of certain races gives bonuses. 
Saying that, while there is weighting, one thing I’ve always wanted to avoid is the ‘planet of hats’, so all races have the potential to do anything rather than being stuck to the stereotypes, which is why there are characters in the examine lore like the lamia scribe and the orc librarian. 
wowsupsexy:  Rags is just html and server functions and it gets crippled by font more than anything. You wouldn't want to see how it looks through a hex viewer, though. You were worried about being unable to code, but rags just turns the input through its horrid and clumsy UI into code, just this one string here: Slaver Array(0)([v: Slavers able to return(0)([v: d4mult(0)(0)])])
Now I would suggest learning Ren'Py or Unity. Both would require work to learn; just getting a UI with buttons and other nice things at first might seem daunting, and then there's the coding. Ren'Py uses python (rather easy to learn) while unity uses C# at its highest level, and can use a lot of other languages with it, or near to none at all to start out with. Not saying you should just stop using rags right away, but if you want something that's easier than being abused.
You might also be worried about not being able to carry over work that you've done already. I would mainly say you shouldn't worry about that, seeing as it should be clean to code or script in a fraction of the time it would require for rags, even with copy and paste. It would be easier to hunt down bugs etc., and if you're worried about encryption and protecting your work, html isn't something to hide behind. Just trying to get you to improve performance, outside of removing the nice font colours.
This is tricky for me to respond to, as this has come up repeatedly before, though if you’ve missed the various times this has come up with the same suggestions over and over again, that’s really no fault of yours, as no one could be expected to follow all that but me. Believe me, I appreciate that you’re trying to help, and any frustration you may notice in this is purely on my own end due to the situation I’m currently in, I assure you. 
To try and TLDR as much as possible. Yes I know RAGS is awful. Tried Ren’py, not bad if you want to make a visual novel or use an existing framework, neither of which fits my games, and learning support for any other kind of game is near non-existent I found. Tried Unity, way, way over my head I admit last time I looked into it. Open to looking at it again the future. Did I mention RAGS is awful, but alas it’s what I know and believe me it’s far, far easier to fix bugs in something you know than something you’re still learning. 
To expand a bit more: conversion is by no means as simple as making the suggestion. I’m now in my second year of converting Whorelock’s Revenge to Twine, and while I’m more than happy to admit it’s not been my main focus, it’s slow, dispiriting work. Even with extracting code from RAGS and using a number of tricks to convert the code en masse, it’s a messy process with many things needing to be sorted out and changed. Just the fact that text-setting variables are the wrong way round compared to what Twine needs, for example, takes an age to sort out on its own. 
The main, absolute number one thing, though, is that I need to be creating regular content, because a) that’s what everyone cares about, and without it I might as well just knock this all on the head entirely, and b) it’s much more interesting and engaging than re-doing old work. The two months I spent last year just doing Whorelock’s conversion were miserable, to put it lightly, and the closest I’ve come to burning out. 
Saying that I do have a plan. Once the Whorelock’s Revenge conversion is done, I’m planning to slow down on No Haven for a while to expand that game as I’d originally planned before putting it on hold. Once I’m in the position to create new content for that game I’ll be in a much better place to look again at converting No Haven for which there are a number of options to explore, some I don’t even want to talk about as they’re not entirely under my sole control. 
Text
Office Phone Booths: 3 Ways They Increase Productivity
About 70 percent of companies in the United States have an open-plan office.
While this plan certainly has its advantages, such as reduced costs, overwhelming evidence shows it is now more detrimental than beneficial. Physical interaction among employees has fallen by as much as 70 percent, which is ironic because open-plan spaces were designed to boost teamwork and increase productivity.
Companies are realizing this, and to fix the issues they are turning to office phone booths. These are temporary installations in the open workspace and can seat one to four people depending on size.
So, how do these booths increase productivity? Keep reading to find out!
1. Increased Privacy Means Increased Productivity
The designers of open-plan offices certainly felt employees would be more compelled to talk to one another and collaborate in these spaces. To be fair, it does seem easier to go and talk to someone when you can see them. It turns out, though, that people send emails even when the recipient is across the table.
Why is this the case?
Mostly it’s because these plans don’t offer privacy protections. Perhaps the information an employee wants to share with another is confidential or classified, so they can’t talk openly lest other people eavesdrop. Anything that hinders the flow of communication in the workplace can affect overall productivity.
This is where a booth like the Talkbox office booth comes in handy. Employees can pop in and have a confidential meeting because these spaces are soundproof. They can also make calls privately. This increased privacy and autonomy boosts productivity.
2. Office Phone Booths for Meditation
In an open office, there’s always someone walking from one end to the other, a phone ringing, a copier churning out documents… the disruptions are endless. Amid that chaos, it can be difficult to concentrate on your work.
Enter the office phone booth.
If you’re the kind of person who works best when the mind is calm, you can use these booths to perform tasks that require lots of focus. Better still, you can use them as your meditation and yoga room.
Once you lock yourself in the booth, it’s just you and your thoughts. It’s so quiet you can hear your heartbeat! This is a perfect environment to de-stress, calm your mind, and unleash your chi energy. When you step out, you’ll be ready to crank out a lot of work.
3. Reduced Amount of Employee Time Off
Open offices are health hazards. If one employee catches an airborne illness, the risk of it spreading to others is quite high; the air conditioning system can even hasten the spread of the bug.
And when most of your workers fall ill at the same time, they will call in sick and request time off. This can leave your company shorthanded, so productivity will decrease.
An office phone booth can mitigate this problem. When employees know they have an illness, they can spend most of the day in the booth instead of taking time off, and the risk of spreading the disease is significantly reduced.
Time to Embrace Office Phone Booths
As a business owner, you might balk at the idea of installing office phone booths because of the cost. But as we have demonstrated, they could be what your business needs to boost employee productivity, which is good for your bottom line.
Need more business insights? Keep browsing our blog!
swapna8-blog · 5 years
Text
10 Reasons Why React Native should be considered for App Development
React Native is a remarkable framework for cross-platform mobile application development.
If you want a presence on both Android and iOS but don't want to spend too much time or money, React Native is what you need. The framework has been used by several popular applications such as Instagram, Airbnb, Walmart, UberEats and Skype.
But wait a minute!
If you are just getting started building your application with React Native, I have a few suggestions for you.
There are several things you should keep in mind before starting to develop a mobile application with React Native.
Let's see.
1. The purpose of developing the React Native apps
React Native is one of the leading frameworks for developing cross-platform mobile applications, but that does not mean every application should be built with it.
If your only goal is to ship on multiple platforms, you could simply wrap your site in a WebView and publish it on both stores, and people with modest expectations have done exactly that. But you can't expect great, visually impressive performance from a WebView app: they often struggle with heavy tasks and can't deliver a user experience comparable to a native application.
React Native, by contrast, is not only fast but also lets us use native components that are almost inaccessible from a WebView application.
Also See - Reasons you should build your app in React Native
2. Select the right navigation library
Even after years of maturing, React Native has not shipped an efficient replacement for the old Navigator component, and most developers still rely on community solutions. It is important to choose the right navigation library for your needs at the start of the project.
There are two kinds of navigation libraries: JavaScript navigators and native navigators. JavaScript navigators are easier to set up, while native navigators are more performance-oriented. First work out what you need, then choose from the variety of options available.
Also See - React Native — Future of cross-platform app development
3. Use the Expo-Kit only if needed
Expo Kit is free and surely one of the best open-source toolchains for React Native, but it also comes with limitations. Expo does not support third-party packages that include custom native modules, so you may have to eject from Expo later.
Use Expo when you want a playground to quickly spin up a new application (for example with create-react-native-app), or when you know all of your application's requirements are covered by what Expo offers.
4. React Native styling
React Native's styling works almost like CSS, but it may leave you dissatisfied: there is no cascade, inheritance is limited, and many properties are simply not supported.
But as we know, almost every system has flaws; the important thing is knowing whether you can find workarounds or alternatives to achieve your goals. All React Native components use flexbox by default, and if you keep your components' styles scoped (UI and styles per screen), you probably won't run into problems.
 5. Scaling your app on different devices and screen sizes
If you are developing an application, you'll want to target different devices and screen sizes. Here you usually have two options: give each screen size its own UI/UX, or use the same one for all screen sizes.
The former is probably the best option for most applications, while developers usually go with the latter when working on a game. You can read screen sizes through the Dimensions API or use a third-party package such as react-native-responsive-ui.
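The scaling idea behind such packages boils down to sizing relative to a base design width. Here is a minimal sketch in plain JavaScript (the 375pt base width is an assumption, and in a real app `deviceWidth` would come from React Native's `Dimensions.get('window').width` rather than a parameter):

```javascript
// Scale a size from a 375pt-wide design mockup to the actual device
// width. Passing the width in as a parameter keeps this testable
// outside React Native.
const BASE_WIDTH = 375;

function scaleSize(size, deviceWidth) {
  return Math.round((deviceWidth / BASE_WIDTH) * size);
}

console.log(scaleSize(16, 375)); // 16 (same width as the mockup)
console.log(scaleSize(16, 750)); // 32 (twice as wide, twice the size)
```

A third-party package would typically add separate scaling for heights and font sizes, but the core idea is this one ratio.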
Also See - Top 10 React Native App Development Companies in India
6. Performance
React Native lets us work in short development cycles and finish projects on time. No heavyweight build step makes you wait: features like hot reloading speed up the bundling process and let you see every change on the emulator or device almost instantly.
 7. Animations
Thinking of building a React Native application with animations? You may want to reconsider. Animations matter a lot these days, but React Native is still improving its animation support.
I'm not telling you to avoid animations, but always test them on a real device; emulators don't give accurate feedback and leave you guessing. You should also pass useNativeDriver: true wherever possible for better performance.
Also, See - Advantages of React Native for Cross-Platform App Development
8. Use the CSS-in-JS encapsulation library
In React Native we have no choice: styles are written in JavaScript. But if you want to make writing CSS enjoyable again and have your JSX read more semantically, then instead of the StyleSheet.create method you can use the styled-components library. It will greatly improve your CSS experience.
 9. Convert any web project to mobile easily
One of the main advantages of React Native is heavy code reuse. You can publish an update for both platforms simultaneously, which makes bugs easier to catch, and developers who haven't worked on the project can understand it without trouble.
React Native boosts productivity and team flexibility. It also reduces the time spent on quality assurance and makes it easy to turn your web project into a mobile app.
 10. Some additional points to consider
There are also many smaller things to know or take into account when developing a cross-platform application with React Native. Here are some of them:
React Native does not support every style property (check the Image, View, and Text style props), and shorthand properties are best avoided. For example, instead of margin, use the specific forms: marginBottom, marginTop, marginLeft, marginRight.
React Native has no DOM elements; instead, we work with native elements.
React Native does not support percentage values for every property. If you give one where it isn't supported, the framework will ignore it or your application will crash.
React Native uses flexbox by default, so learn it and use it whenever you can. It makes layout work easy.
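To make those last points concrete, here is what a typical style object looks like as plain JavaScript (the values are arbitrary; in a real app you would pass this object to React Native's StyleSheet.create):

```javascript
// Styles are plain JS objects. Note the specific margin properties
// (marginTop, marginBottom, ...) rather than the CSS shorthand
// `margin: "8px 16px"`, and the flexbox settings spelled out.
const styles = {
  container: {
    flex: 1,
    flexDirection: 'column', // React Native's default, made explicit
    alignItems: 'stretch',
  },
  card: {
    marginTop: 8,
    marginBottom: 8,
    marginLeft: 16,
    marginRight: 16,
  },
};

console.log(Object.keys(styles.card).join(', '));
// marginTop, marginBottom, marginLeft, marginRight
```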
Are you searching for a React Native app development company? Your search ends with Fusion Informatics, one of the leading mobile app development companies in India. We build React Native applications and can develop and deliver apps quickly for our valued clients. Fusion Informatics provides iPhone and Android app development in Bangalore using the latest React Native technology.
To know more about the Fusion Informatics visits our portfolio.
To Reach More -
ios apps development Bangalore
iphone app development in Mumbai
iot companies bangalore
Mobile application development company in Mumbai
myrtlecornish · 5 years
Text
React Data Layer Series - Part 1
This post is the first part of an 8-part series going in-depth into how to build a robust real-world frontend app data layer. This first post sets the stage for where we’ll be going in the series. The series will start May 20th and one post will be released daily! If you’d like to be notified via email when the series begins, sign up for email updates.
Even though most frontend apps are backed by a web service, building out the data layer for such frontend apps is hard. State management libraries are often unopinionated about organizing your data, so you need to decide that for yourself. They also aren’t always designed with an eye toward accessing data from web services, so setting up that access can take some work. And although browsers now have good support for running apps offline, actually building a real system to do so is fraught with inherent complexity.
Some GraphQL clients such as Apollo have built-in support for both remote and local data, but you still need to make decisions about when and how remote and locally-cached data should interact. If you want to fully escape that complexity, there are a few off-the-shelf data libraries that handle much of this complexity for you—but their features, pricing, and data privacy won’t be a fit for every project.
With all these challenges, how can we efficiently set up robust data layers for our frontend apps? This blog post series is an attempt to answer this question by demonstrating common patterns for building a robust data layer in the context of a React app. The code we’ll build together can be used as the basis for a React/Redux app connecting to a JSON-based web service. The same patterns can be applied in other contexts as well, such as if you’re building an app with a GraphQL client, another frontend framework like Vue, or a native platform. And if you’re considering an off-the-shelf system like Firebase or Realm, these principles will help you evaluate the features they offer and think through any bugs that come up while integrating them.
We’ll apply these principles over the course of building out a project for tracking a list of video games. On the surface, the features couldn’t be simpler: we’ll display a list of video game titles and provide the ability to add additional games. We won’t even be building the ability to edit or delete games! But the apparent simplicity will highlight the depth of complexity under the surface, as we tackle questions like:
How will we organize our data stores?
How will we authenticate to the server? How will we store the access token securely?
How can we store our data offline in the browser? How can we still provide users access to the latest data while online?
Should we allow users to make changes to data while offline? If so, how can we handle this?
This series assumes you have familiarity with modern JavaScript features like:
Array Rest and Spread, and Object Rest/Spread
Arrow functions
Async/Await
Class fields
Destructuring
Object property value shorthands
Promises
If not, the above links go to excellent articles and chapters by Axel Rauschmayer introducing them. Familiarity with modern JavaScript features is an important way to be effective in React development in general and frontend development in particular, so it will be time well spent!
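As a quick refresher, here are several of those features in one small snippet (the game-related names are purely illustrative, not code from the series):

```javascript
// Object spread, default parameters, property shorthand, arrow
// functions, destructuring, and async/await in a few lines.
const defaults = { platform: 'PC', finished: false };

const addGame = (title, overrides = {}) => ({ ...defaults, title, ...overrides });

async function loadGames() {
  // Stand-in for an HTTP call; the real app will use axios here.
  const fetched = await Promise.resolve([addGame('Celeste')]);
  const [{ title, platform }] = fetched; // array + object destructuring
  return { title, platform };           // property value shorthand
}

loadGames().then((game) => console.log(game));
// { title: 'Celeste', platform: 'PC' }
```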
This series also assumes you have a basic familiarity with React, Redux, and connecting to web services (we’ll be using the Axios client library to do so). If not, spend some time with the following guides:
React Docs
Redux Docs
Axios Docs
We’ll also be using the following libraries and formats, but it’s okay if you aren’t familiar with them. You’ll be able to pick up enough about how they work from how we use them in this guide, and you can dive into them more in-depth later as you have need.
The JSON:API format for data interchange.
React Materialize for UI components with a nice look and feel.
Redux Thunk for deferred actions.
Redux Persist to save data offline.
Although I prefer and recommend Firefox for general web browsing, in this guide we’ll be using Google Chrome for some of the features its web developer tools provide when it comes to easily working with service workers for offline purposes.
You’ll also notice that we use the Yarn package manager in place of npm. Yarn connects to the same NPM repository as the npm client; it just provides simpler commands, better performance, and a more predictable use of lock files. We recommend using Yarn for all professional frontend projects.
Why Redux?
React has a lot of different options for state management layers, and Redux isn’t the best choice for everything. Let’s talk through some of the options out there and why you might choose them.
setState() and the useState() hook are built into React. We’ll be using these for transient data. One downside is that such state is local to the component, and passing it around the app can get cumbersome.
React Context is an API that was made public in React 16.3 and allows passing data through multiple levels of components.
MobX is a popular state management solution that offers “transparent functional reactive programming,” which is to say that it offers APIs that look like you’re interacting with plain JavaScript objects, but under the hood it’ll kick off reactions to keep your UI in sync.
To learn more about these and many other options, check out a blog post about React State Museum, a project to compare different React state management options.
So why are we going with Redux in this case? A few reasons:
It’s still the most popular state management library in the React ecosystem, so when you do need more than what React provides out of the box, it’s a good choice.
We’re taking advantage of Redux’s centralized data storage to easily persist our data.
Redux’s architecture decouples actions from the changes made to individual items of state (via reducers). As we add more richness to our app like different approaches to offline handling, we will change how reducers work with relatively few changes to the actions dispatched. This is what Redux maintainers mean when they say that if you are only using Redux to make data available globally, you probably didn’t need Redux in the first place. Redux is best when you have benefits to gain from the action/reducer decoupling.
Personally, I have no problem with MobX-style “magic” happening to take care of details under the hood. But one of the advantages of Redux’s explicitness is it can be easier to debug.
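That third point, the action/reducer decoupling, can be sketched without any Redux dependency at all: a reducer is just a pure function of (state, action). The action types and game shape below are assumptions for illustration, not the series' final code:

```javascript
// Actions describe *what happened*; the reducer decides *how state
// changes*. Swapping in offline-aware behavior later means changing
// the reducer while the dispatched actions stay the same.
const initialState = { games: [] };

function gamesReducer(state = initialState, action) {
  switch (action.type) {
    case 'games/loaded':
      return { ...state, games: action.payload };
    case 'games/added':
      return { ...state, games: [...state.games, action.payload] };
    default:
      return state;
  }
}

let state = gamesReducer(undefined, { type: '@@INIT' });
state = gamesReducer(state, { type: 'games/loaded', payload: [{ title: 'Zelda' }] });
state = gamesReducer(state, { type: 'games/added', payload: { title: 'Hades' } });
console.log(state.games.map((g) => g.title)); // [ 'Zelda', 'Hades' ]
```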
Let’s Go
That’s all the introduction we need. Check back on May 20 and we’ll get started creating our app!
ellahmacdermott · 6 years
Text
Audits and Quality Assurance: Patching the Holes in Smart Contract Security
On July 10, 2018, news broke that the cryptocurrency wallet and decentralized exchange Bancor had been hit by a hack. A wallet the Bancor team used to update the protocol’s smart contracts was infiltrated, and the $23.5 million vulnerability allowed the hackers to run off with $12.5 million in ETH, $1 million in NPXS tokens and $10 million of Bancor’s own BNT token.
Following the hack, the Bancor team froze the BNT in question in an effort to stanch its losses.
The latest of its kind, the attack is an unfortunate reminder that smart contracts are not foolproof. Even built as they are on the blockchain’s security intensive network, they can feature bugs, backdoors and vulnerabilities that are ripe for exploitation.
Before Bancor, we saw the popular Ethereum wallet Parity drained of 150,000 ETH (now worth just over $68 million) in July of 2017. In November of the same year, Parity lost even more than this when a less-experienced coder accidentally froze some $153 million worth of ether and other tokens.
In perhaps the most infamous smart contract hack in the industry to date, The DAO, a decentralized venture fund, lost 3.6 million ether in June of 2016. The stolen funds are now worth $1.6 billion, and the fallout of the attack saw Ethereum hard fork to recoup losses.
The Why and How: Making the Same Mistake
If three’s company, then The DAO, Parity and now Bancor have become the poster triplets of smart contract vulnerabilities. But they’re not alone in their weakness, and similar smart contract bugs have been exploited or nearly exploited on other networks.
For such a nascent technology, such flaws may be expected, but given the mass sum of funds these contracts are supposed to protect, truly stalwart security measures are not yet routinely employed.
To Hartej Sawhney, co-founder of Hosho cybersecurity firm, the sheer amount of funds at stake is enough of an incentive to attract black hats to these smart contracts, especially if there’s a central point through which they can probe for access.
“There’s money behind every smart contract, so there’s an incentive to hack into it. And the scary part of smart contracts like Bancor is that they’ve coded their smart contracts in a way that gives centralized power to the founders of the project. They’ve put this backdoor in there,” Sawhney told Bitcoin Magazine in an interview.
Sawhney is referring to Bancor’s ability to confiscate and freeze tokens at will, as the smart contracts that govern their wallet and exchange feature central points of control. This degree of control has been widely criticized as centralized to the point that Bancor shouldn’t be able to advertise itself as a decentralized exchange.
And it may have even provided the hackers with an entry point into the network. While Bancor has not revealed the specifics of the hack and its execution, the team wrote in a blog post that “a wallet used to upgrade some smart contracts was compromised.” Sawhney indicated in our interview that “most smart contracts are coded to be irreversible,” while Bancor’s own are completely mutable. The hackers could have exploited — and likely did exploit — the same backdoor that the developers put into place to manage their project.
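To make the criticism concrete, here is a deliberately toy JavaScript sketch of the pattern at issue: an owner-only switch baked into a token ledger. (This illustrates the general shape of such a backdoor, not Bancor's actual Solidity code; the names are invented.)

```javascript
// A toy ledger with an owner backdoor: whoever holds the `owner` key
// can freeze any account, so compromising that single wallet
// compromises every holder.
class ToyToken {
  constructor(owner) {
    this.owner = owner;
    this.balances = new Map();
    this.frozen = new Set();
  }
  mint(to, amount) {
    this.balances.set(to, (this.balances.get(to) || 0) + amount);
  }
  transfer(from, to, amount) {
    if (this.frozen.has(from)) throw new Error('account frozen');
    const bal = this.balances.get(from) || 0;
    if (bal < amount) throw new Error('insufficient funds');
    this.balances.set(from, bal - amount);
    this.mint(to, amount);
  }
  // The backdoor: a single key check guards total control.
  freeze(caller, account) {
    if (caller !== this.owner) throw new Error('not owner');
    this.frozen.add(account);
  }
}

const token = new ToyToken('admin-key');
token.mint('alice', 100);
token.freeze('admin-key', 'alice'); // one stolen key, everyone's funds at risk
```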
Bancor aside, Dmytro Budorin, CEO of cybersecurity community Hacken, echoed Sawhney’s belief that the industry’s treasure trove of assets is a powerful impetus for hackers to dirty their hands. He also believes that the relative youth of the technology makes it vulnerable to detrimental exploits.
“Coding on blockchain is something new,” Budorin added in an interview with Bitcoin Magazine. “We still lack security standards and best practices on how to properly code smart contracts. Also, when coding smart contracts, programmers think more about functionality than about security, since a programmer’s main task is to simply make the code work, and security is usually an afterthought.”
Working with new programming languages, security can take a back seat to functionality. More than just the casualty of a steep learning curve, Sawhney believes that security can slip by the eye of software engineers because they “don’t have a quality assurance (QA) mindset.”
With millions at stake and potential holes in the code to exploit, hackers are bound to drum up a scheme to breach these contracts, according to Budorin. Even if a team has audited their code for expected or known vulnerabilities, “a new type of attack can be developed any time and nothing can protect you from this.”
All it takes is a spurt of intuitive thinking to probe a smart contract’s code for an unexplored opening, Amy Wan, CEO and co-founder of Sagewise, iterated in a separate interview with Bitcoin Magazine.
“It is not often that developers are able to write perfect code that works the first time around — and even when that happens the code cannot be adapted to unforeseen situations. Code is also static, which makes smart contracts very rigid. However, humans are anything but static and very creative when it comes to problem solving. This combination creates something of a perfect storm, making smart contracts ill-suited where there are bugs in coding or loopholes/situation changes.”
Wan believes that “technology isn't about tech itself as much as it is about how humans interact with it,” meaning that we “are always going to have folks looking for opportunities to test the shortcomings of technology, which may result in hacks.”
To Wan, smart contracts feature intrinsic vulnerabilities. To make security matters worse, she also holds that they “cannot be amended or terminated (or in technologist speak, evolved or upgraded),” and their static nature renders them susceptible to the dynamic, adaptive strategies of black hats.
“Code aside, with every situation, there are an infinite number of things that can go awry. The rigidity of smart contracts presently cannot accommodate the fluidity of the real world,” she said.
Mending the Achilles Heel
If technical flexibility is the crux of smart contract weakness, then the fix is in the inception and carry-through of their development. Developers should put preventative measures in place to ensure that their code can bend without breaking, the CEOs expressed.
“We need to have a more comprehensive approach in order to solve this problem in the long term,” Budorin argued. “First of all, even though it is impossible to make all contracts absolutely secure, smart contract risks can be reduced. The best way to secure a smart contract is to have a security engineer on staff, conduct two different independent audits, and launch a bug bounty program for a dedicated period of time before deployment.”
Hacken itself facilitates such bug bounties, and the platform, called HackenProof, has seen its white hat community audit and test such industry projects as VeChainThor, Neverdie, Legolas Exchange, NapoleonX, Shopin and Enecuum. Budorin and his team find that bug bounties provide a reliable if tertiary buffer for projects before they go public.
“We believe that the only efficient way to mitigate modern cybersecurity threats is to host bug bounty programs on bug bounty platforms. This is called a crowdsourced security approach,” Budorin explained.
“Bug bounty platforms attract a crowd of third-party cybersecurity experts (dozens if not hundreds at a time) to test the client’s software. Testing can be ongoing for months or even years.”
Sawhney agrees that projects need to house more on-staff security experts to police vulnerabilities, while lamenting the fact that some projects lack a CIO or CTO for this effect. But he also indicated that, in some cases, companies need only to submit themselves to a proper audit to avoid a fate similar to Bancor’s.
“Some of these companies believe that they have the world’s best engineers, so they think they don’t need an audit. And if they get one, chances are they’ve done a third-party audit that was in their favor. Even if they’re getting an audit, some of these audit companies aren’t doing what we deem to be a professional audit. They’re taking the code and putting it through automated tooling. They’re not taking the time to do some of the more manual tasks which includes a dynamic analysis, quality assurance,” he explained.
The manual tasks that Sawhney lauds are at the heart of Hosho’s own auditing processes. They allow Hosho’s team to sniff out coding errors that automated tooling might miss, like discrepancies between the smart contract’s token algorithms and a white paper’s business model.
“So the most manual part of conducting an audit is marrying the code to the words — we call it dynamic analysis. Most of the time when we find errors with a smart contract, we’re finding colossal errors in the business logic. We’re finding everything from mathematical errors to errors in token allocation,” Sawhney said.
He went on to reveal that Hosho’s team includes professionals “from the infosec, devcon communities that are white hats who have spent years doing QA.” QA, shorthand for quality assurance, is a method by which coders test a code for its designed function to check for any malfunctions, defects and other flaws that may render it vulnerable or inoperable.
As Sawhney indicated earlier, part of the reason these projects and their auditors don’t do QA is simply because they lack the professional experience to do so. It’s easier, he claimed, to teach Solidity (a smart contract coding language) to those who know how to conduct sound QA than the other way around.
When lack of QA training or a learning curve isn’t the issue, however, Sawhney suggested that, at times, projects won’t secure a thorough audit because they’re simply cutting corners.
“Sometimes I think it’s sheer laziness and being cheap. They see that cost to code a smart contract was only $10k and [an auditor] is charging $30k to review it. They say, ‘Nah, we don’t need that. We have the best engineers in the world so we’re good.’”
To Sawhney, there’s no substitute for a thorough audit. He also holds that, once an audit has been completed, the smart contract should come with a seal of approval, one that both attests to the audit’s quality and reassures users that no code has been altered after the fact. For Hosho’s work, this comes in the form of a GPG file, a cryptographic stamp that simultaneously functions like a certificate of authenticity and denotes the final (or at least most recent) version of audited code, acting rather like the seal on a bottle of cough syrup that proves it hasn’t been tampered with since it last passed quality control.
“Having central governments, regulators, lawyers, PR firms, investors, token holders — everyone — looking for this GPG file, this sign of approval [answers the question]: Has this code been sealed? Because we can monitor this code once we’ve put this seal on it to prove that no one has touched this code, not one line of this code has been changed since a third party audited it. If code changes you’re opening up room for security vulnerabilities.”
Wan’s own solution offers a different sort of prescription, in that she adds post-audit safety nets like Sagewise’s software as a smart contract’s third line of defense.
“Going forward, I believe that blockchain companies will be able to prevent smart contract disasters by using a smart contract developer whose sole focus is developing smart contracts, hiring a reputable security auditing firm, and including a catch-all safety net into smart contracts, such as Sagewise's SDK.”
The Sagewise SDK integrates with smart contracts to police malicious inputs. It gives developers the chance to freeze the smart contract in question and adjust it accordingly.
“It starts with a monitoring and notification service so users are aware of what's happening with their smart contract. Paired with our SDK, which basically acts as an arbitration clause in code, users are notified of functions executing on their smart contract and, if such functions are unintended, [they have] the ability to freeze the smart contract. They then can take the time they need to fix whatever needs to be fixed, whether that's merely fixing a coding error to amending the smart contract or resolving a dispute,” she said.
A Community Problem, a Community Solution
In our interview, Wan claimed that “[less than] 2 percent of the population is able to read code.” Fewer people still are able to read Solidity, let alone at the level needed to insulate it with airtight security features.
So even if projects and companies want to take the measures necessary to vet and protect their code properly, they may be wanting for talent and resources. This problem will likely be educated out of existence as more software engineers develop a thorough, more sophisticated understanding of Solidity and other smart contract programming languages. More mature coding languages may present a solution to this ailment, as well.
But for the time being, the community can help developers and teams to err on the side of caution. Like an arbiter with skin in the game, people using these services need to step up and demand action and change, Wan believes. Otherwise these types of security breaches will continue to happen.
“[B]ecause much of the population cannot read code, it is difficult for them to hold developers accountable for when they do things like code an administrative backdoor into their smart contract (which many large projects have done),” said Wan.
“Just in 2017 alone, half a billion dollars in value was lost in smart contracts, but that apparently has not been enough to get developers to consider adding additional safety nets or community members to demand them. Perhaps we will need to lose billions more to get people to realize that this isn't how the system should work.”
Sawhney also reiterated this point: “[More] people need to be outspoken, call people out. I think people are scared because the community is tight-knit and everybody knows everybody. No one wants to shun people. There’s not enough self-governance in this space, and I think that’s the biggest step this community needs to take.”
He added, “[not] enough pressure [is] being put on security; there’s not enough regulation around security.”
In an effort to bring self-regulation to the forefront of the industry’s to-do list, Hosho is hosting a summit for cybersecurity firms in Berlin. Slated for this September, Sawhney hopes the summit will spawn a self-regulatory organization (SRO) from its attendees, “complete with a certificate for our work, kind of like the Big Four for financial audits.”
Adding to the conversation on self-regulation, Budorin finds that the community would do well to document exploited vulnerabilities. This would create a library of case studies and situations for developers to study and to create the solutions necessary to avoid the same pitfalls in the future.
“...the blockchain community needs to collect, store and analyze all known vulnerabilities that have been found in smart contracts and host regular security conferences that will cover security issues in blockchain and develop security guidelines so that new generation of blockchain programmers is more prepared for these problems,” he said.
The onus is not on the community alone, as the lion’s share of responsibility rests on developers to ensure that their code is as sound as possible before it reaches an audience. Together, however, the industry’s community and its architects can combine perspectives to make smart contract hazards a thing of the past.
Until then, Sawhney, Budorin and Wan’s perspectives — and their respective companies’ purposes — provide a healthy reality check for the industry’s pain points. For mainstream adoption and acceptance, these points need be addressed if there is to be any sort of sustained sense of confidence in this new technology.
This article originally appeared on Bitcoin Magazine.
from InvestmentOpportunityInCryptocurrencies via Ella Macdermott on Inoreader https://bitcoinmagazine.com/articles/audits-and-quality-assurance-patching-holes-smart-contract-security/
luxus4me · 6 years
CSS-Tricks http://j.mp/2Ck6Ag4
You may have already seen a bunch of tutorials on how to style the range input. While this is another article on that topic, it's not about how to get any specific visual result. Instead, it dives into browser inconsistencies, detailing what each does to display that slider on the screen. Understanding this is important because it helps us have a clear idea about whether we can make our slider look and behave consistently across browsers and which styles are necessary to do so.
Looking inside a range input
Before anything else, we need to make sure the browser exposes the DOM inside the range input.
In Chrome, we bring up DevTools, go to Settings, Preferences, Elements and make sure the Show user agent shadow DOM option is enabled.
Sequence of Chrome screenshots illustrating the steps from above.
In Firefox, we go to about:config and make sure the devtools.inspector.showAllAnonymousContent flag is set to true.
Sequence of Firefox screenshots illustrating the steps from above.
For a very long time, I was convinced that Edge offers no way of seeing what's inside such elements. But while messing with it, I discovered that where there's a will (and some dumb luck) there's a way! We need to bring up DevTools, then go to the range input we want to inspect, right click it, select Inspect Element and bam, the DOM Explorer panel now shows the structure of our slider!
Sequence of Edge screenshots illustrating the steps from above.
Apparently, this is a bug. But it's also immensely useful, so I'm not complaining.
The structure inside
Right from the start, we can see a source for potential problems: we have very different beasts inside for every browser.
In Chrome, at the top of the shadow DOM, we have a div we cannot access anymore. This used to be possible back when /deep/ was supported, but then the ability to pierce through the shadow barrier was deemed to be a bug, so what used to be a useful feature was dropped. Inside this div, we have another one for the track and, within the track div, we have a third div for the thumb. These last two are both clearly labeled with an id attribute, but another thing I find strange is that, while we can access the track with ::-webkit-slider-runnable-track and the thumb with ::-webkit-slider-thumb, only the track div has a pseudo attribute with this value.
Inner structure in Chrome.
In Firefox, we also see three div elements inside, only this time they're not nested - all three of them are siblings. Furthermore, they're just plain div elements, not labeled by any attribute, so we have no way of telling which is which component when looking at them for the first time. Fortunately, selecting them in the inspector highlights the corresponding component on the page and that's how we can tell that the first is the track, the second is the progress and the third is the thumb.
Inner structure in Firefox.
We can access the track (first div) with ::-moz-range-track, the progress (second div) with ::-moz-range-progress and the thumb (last div) with ::-moz-range-thumb.
The structure in Edge is much more complex, which, to a certain extent, allows for a greater degree of control over styling the slider. However, we can only access the elements with -ms- prefixed IDs, which means there are also a lot of elements we cannot access, with baked in styles we'd often need to change, like the overflow: hidden on the elements between the actual input and its track or the transition on the thumb's parent.
Inner structure in Edge.
Having a different structure and being unable to access all the elements inside in order to style everything as we wish means that achieving the same result in all browsers can be very difficult, if not even impossible, even if having to use a different pseudo-element for every browser helps with setting individual styles.
We should always aim to keep the individual styles to a minimum, but sometimes it's just not possible, as setting the same style can produce very different results due to having different structures. For example, setting properties such as opacity or filter or even transform on the track would also affect the thumb in Chrome and Edge (where it's a child/descendant of the track), but not in Firefox (where it's its sibling).
The most efficient way I've found to set common styles is by using a Sass mixin because the following won't work:
input::-webkit-slider-runnable-track, input::-moz-range-track, input::-ms-track { /* common styles */ }
To make it work, we'd need to write it like this:
input::-webkit-slider-runnable-track { /* common styles */ } input::-moz-range-track { /* common styles */ } input::-ms-track { /* common styles */ }
But that's a lot of repetition and a maintainability nightmare. This is what makes the mixin solution the sanest option: we only have to write the common styles once so, if we decide to modify something in the common styles, then we only need to make that change in one place - in the mixin.
@mixin track() { /* common styles */ } input { &::-webkit-slider-runnable-track { @include track } &::-moz-range-track { @include track } &::-ms-track { @include track } }
Note that I'm using Sass here, but you may use any other preprocessor. Whatever you prefer is good as long as it avoids repetition and makes the code easier to maintain.
Initial styles
Next, we take a look at some of the default styles the slider and its components come with in order to better understand which properties need to be set explicitly to avoid visual inconsistencies between browsers.
Just a warning in advance: things are messy and complicated. It's not just that we have different defaults in different browsers, but also changing a property on one element may change another in an unexpected way (for example, when setting a background also changes the color and adds a border).
WebKit browsers and Edge (because, yes, Edge also applies a lot of WebKit prefixed stuff) also have two levels of defaults for certain properties (for example those related to dimensions, borders, and backgrounds), if we may call them that - before setting -webkit-appearance: none (without which the styles we set won't work in these browsers) and after setting it. The focus is going to be however on the defaults after setting -webkit-appearance: none because, in WebKit browsers, we cannot style the range input without setting this and the whole reason we're going through all of this is to understand how we can make our lives easier when styling sliders.
Note that setting -webkit-appearance: none on the range input and on the thumb (the track already has it set by default for some reason) causes the slider to completely disappear in both Chrome and Edge. Why that happens is something we'll discuss a bit later in this article.
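To recap the baseline all the observations below assume, here is the minimal reset WebKit browsers and Edge require before any custom slider styles apply — with the caveat just mentioned that, once it's set, dimensions must be given explicitly or the slider disappears in Chrome and Edge:

```css
input[type='range'] {
  -webkit-appearance: none; /* custom styles won't apply in Chrome/Edge without this */
}

input[type='range']::-webkit-slider-thumb {
  -webkit-appearance: none; /* the thumb needs it as well */
}
```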
The actual range input element
The first property I've thought about checking, box-sizing, happens to have the same value in all browsers - content-box. We can see this by looking up the box-sizing property in the Computed tab in DevTools.
The box-sizing of the range input, comparative look at all three browsers (from top to bottom: Chrome, Firefox, Edge).
Sadly, that's not an indication of what's to come. This becomes obvious once we have a look at the properties that give us the element's boxes - margin, border, padding, width, height.
By default, the margin is 2px in Chrome and Edge and .7em in Firefox.
Before we move on, let's see how we got the values above. The computed length values we get are always px values.
However, Chrome shows us how browser styles were set (the user agent stylesheet rule sets on a grey background). Sometimes the computed values we get weren't explicitly set, so that's no use, but in this particular case, we can see that the margin was indeed set as a px value.
Tracing browser styles in Chrome, the margin case.
Firefox also lets us trace the source of the browser styles in some cases, as shown in the screenshot below:
Tracing browser styles in Firefox and how this fails for the margin of our range input.
However, that doesn't work in this particular case, so what we can do is look at the computed values in DevTools and then check whether these computed values change in one of the following situations:
When changing the font-size on the input or on the html, which indicates it was set as an em or rem value.
When changing the viewport, which indicates the value was set using % values or viewport units. This can probably be safely skipped in a lot of cases though.
Changing the font-size of the range input in Firefox also changes its margin value.
The same goes for Edge, where we can trace where user styles come from, but not browser styles, so we need to check if the computed px value depends on anything else.
Changing the font-size of the range input in Edge doesn't change its margin value.
In any event, this all means margin is a property we need to set explicitly in the input[type='range'] if we want to achieve a consistent look across browsers.
Since we've mentioned the font-size, let's check that as well. Sure enough, this is also inconsistent.
First off, we have 13.3333px in Chrome and, in spite of the decimals that might suggest it's the result of a computation where we divided a number by a multiple of 3, it seems to have been set as such and doesn't depend on the viewport dimensions or on the parent or root font-size.
The font-size of the range input in Chrome.
Firefox shows us the same computed value, except this seems to come from setting the font shorthand to -moz-field, which I was first very confused about, especially since background-color is set to -moz-Field, which ought to be the same since CSS keywords are case-insensitive. But if they're the same, then how can it be a valid value for both properties? Apparently, this keyword is some sort of alias for making the input look like what any input on the current OS looks like.
The font-size of the range input in Firefox.
Finally, Edge gives us 16px for its computed value and this seems to be either inherited from its parent or set as 1em, as illustrated by the recording below:
The font-size of the range input in Edge.
This is important because we often want to set dimensions of sliders and controls (and their components) in general using em units so that their size relative to that of the text on the page stays the same - they don't look too small when we increase the size of the text or too big when we decrease the size of the text. And if we're going to set dimensions in em units, then having a noticeable font-size difference between browsers here will result in our range input being smaller in some browsers and bigger in others.
For this reason, I always make sure to explicitly set a font-size on the actual slider. Or I might set the font shorthand, even though the other font-related properties don't matter here at this point. Maybe they will in the future, but more on that later, when we discuss tick marks and tick mark labels.
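In practice, that normalization can be as small as this (1em is just an example value, chosen so the slider follows the surrounding text size):

```css
input[type='range'] {
  margin: 0;      /* 2px in Chrome and Edge, font-dependent in Firefox by default */
  font-size: 1em; /* 13.3333px in Chrome and Firefox, 16px in Edge by default */
}
```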
Before we move on to borders, let's first see the color property. In Chrome this is rgb(196,196,196) (set as such), which makes it slightly lighter than silver (rgb(192,192,192)/ #c0c0c0), while in Edge and Firefox, the computed value is rgb(0,0,0) (which is solid black). We have no way of knowing how this value was set in Edge, but in Firefox, it was set via another similar keyword, -moz-fieldtext.
The color of the range input, comparative look at all three browsers (from top to bottom: Chrome, Firefox, Edge).
The border is set to initial in Chrome, which is equivalent to none medium currentcolor (values for border-style, border-width and border-color). How thick a medium border is exactly depends on the browser, though it's at least as thick as a thin one everywhere. In Chrome in particular, the computed value we get here is 0.
The border of the range input in Chrome.
In Firefox, we also have a none medium currentcolor value set for the border, though here medium seems to be equivalent to 0.566667px, a value that doesn't depend on the element or root font-size or on the viewport dimensions.
The border of the range input in Firefox.
We can't see how everything was set in Edge, but the computed values for border-style and border-width are none and 0 respectively. The border-color changes when we change the color property, which means that, just like in the other browsers, it's set to currentcolor.
The border of the range input in Edge.
The padding is 0 in both Chrome and Edge.
The padding of the range input, comparative look at Chrome (top) and Edge (bottom).
However, if we want a pixel-perfect result, then we need to set it explicitly because it's set to 1px in Firefox.
The padding of the range input in Firefox.
Now let's take another detour and check the backgrounds before we try to make sense of the values for the dimensions. Here, we get that the computed value is transparent/ rgba(0, 0, 0, 0) in Edge and Firefox, but rgb(255,255,255) (solid white) in Chrome.
The background-color of the range input, comparative look at all three browsers (from top to bottom: Chrome, Firefox, Edge).
And... finally, let's look at the dimensions. I've saved this for last because here is where things start to get really messy.
Chrome and Edge both give us 129px for the computed value of the width. Unlike with previous properties, we can't see this being set anywhere in Chrome, which would normally lead me to believe it's something that depends either on the parent, stretching horizontally to fit as all block elements do (which is definitely not the case here) or on the children. There's also a -webkit-logical-width property taking the same 129px value in the Computed panel. I was a bit confused by this at first, but it turns out it's the writing-mode relative equivalent - in other words, it's the width for horizontal writing-mode and the height for vertical writing-mode.
Changing the font-size of the range input in Chrome doesn't change its width value.
In any event, it doesn't depend on the font-size of the input itself or of that of the root element nor on the viewport dimensions in either browser.
Changing the font-size of the range input in Edge doesn't change its width value.
Firefox is the odd one out here, returning a computed value of 160px for the default width. This computed value does however depend on the font-size of the range input - it seems to be 12em.
Changing the font-size of the range input in Firefox also changes its width value.
In the case of the height, Chrome and Edge again both agree, giving us a computed value of 21px. Just like for the width, I cannot see this being set anywhere in the user agent stylesheet in Chrome DevTools, which normally happens when the height of an element depends on its content.
Changing the font-size of the range input in Chrome doesn't change its height value.
This value also doesn't depend on the font-size in either browser.
Changing the font-size of the range input in Edge doesn't change its height value.
Firefox is once again different, giving us 17.3333px as the computed value and, again, this depends on the input's font-size - it's 1.3em.
Changing the font-size of the range input in Firefox also changes its height value.
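Given these differences, explicitly setting em-based dimensions keeps the slider scaling with the text in all three browsers; the values below simply mirror the Firefox defaults and are only an illustration:

```css
input[type='range'] {
  width: 12em;   /* the Firefox default; Chrome and Edge default to 129px */
  height: 1.3em; /* the Firefox default; Chrome and Edge default to 21px */
}
```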
But this isn't worse than the margin case, right? Well, so far, it isn't! But that's just about to change because we're now moving on to the track component.
The range track component
There's one more possibility regarding the actual input dimensions that we haven't yet considered: that they're influenced by those of its components. So let's explicitly set some dimensions on the track and see whether that influences the size of the slider.
Apparently, in this situation, nothing changes for the actual slider in the case of the width, but we can spot more inconsistencies when it comes to the track width, which, by default, stretches to fill the content-box of the parent input in all three browsers.
In Firefox, if we explicitly set a width, any width on the track, then the track takes this width we give it, expanding outside of its parent slider or shrinking inside, but always staying middle aligned with it. Not bad at all, but, sadly, it turns out Firefox is the only browser that behaves in a sane manner here.
Explicitly setting a width on the track changes the width of the track in Firefox, but not that of the parent slider.
In Chrome, the track width we set is completely ignored and it looks like there's no sane way of making it have a value that doesn't depend on that of the parent slider.
Changing the width of the track doesn't do anything in Chrome (computed value remains 129px).
As for insane ways, using transform: scaleX(factor) seems to be the only way to make the track wider or narrower than its parent slider. Do note that doing this also causes quite a few side effects. The thumb is scaled horizontally as well and its motion is limited to the scaled down track in Chrome and Edge (as the thumb is a child of the track in these browsers), but not in Firefox, where its size is preserved and its motion is still limited to the input, not the scaled down track (since the track and thumb are siblings here). Any lateral padding, border or margin on the track is also going to be scaled.
Moving on to Edge, the track again takes any width we set.
Edge also allows us to set a track width that's different from that of the parent slider.
This is not the same situation as Firefox however. While setting a width greater than that of the parent slider on the track makes it expand outside, the two are not middle aligned. Instead, the left border limit of the track is left aligned with the left content limit of its range input parent. This alignment inconsistency on its own wouldn't be that much of a problem - a margin-left set only on ::-ms-track could fix it.
However, everything outside of the parent slider's content-box gets cut out in Edge. This is not equivalent to having overflow set to hidden on the actual input, which would cut out everything outside the padding-box, not content-box. Therefore, it cannot be fixed by setting overflow: visible on the slider.
This clipping is caused by the elements between the input and the track having overflow: hidden, but, since we cannot access these, we also cannot fix this problem. Setting everything such that no component (including its box-shadow) goes outside the content-box of the range is an option in some cases, but not always.
For the height, Firefox behaves in a similar manner it did for the width. The track expands or shrinks vertically to the height we set without affecting the parent slider and always staying middle aligned to it vertically.
Explicitly setting a height on the track changes the height of the track in Firefox, but not that of the parent slider.
The default value for this height with no styles set on the actual input or track is .2em.
Changing the font-size on the track changes its computed height in Firefox.
Unlike in the case of the width, Chrome allows the track to take the height we set and, if we're not using a % value here, it also makes the content-box of the parent slider expand or shrink such that the border-box of the track perfectly fits in it. When using a % value, the actual slider and the track are middle aligned vertically.
Explicitly setting a height on the track in % changes the height of the track in Chrome, but not that of the parent slider. Using other units, the actual range input expands or shrinks vertically such that the track perfectly fits inside.
The computed value we get for the height without setting any custom styles is the same as for the slider and doesn't change with the font-size.
Changing the font-size on the track doesn't change its computed height in Chrome.
What about Edge? Well, we can change the height of the track independently of that of the parent slider and they both stay middle aligned vertically, but all of this holds only as long as the track height we set is smaller than the initial height of the actual input. Above that, the track's computed height is always equal to that of the parent range.
Explicitly setting a height on the track in Edge doesn't change the height of the parent slider and the two are middle aligned. However, the height of the track is limited by that of the actual input.
The initial track height is 11px and this value doesn't depend on the font-size or on the viewport.
Changing the font-size on the track doesn't change its computed height in Edge.
Moving on to something less mindbending, we have box-sizing. This is border-box in Chrome and content-box in Edge and Firefox so, if we're going to have a non-zero border or padding, then box-sizing is a property we need to explicitly set in order to even things out.
The box-sizing of the track, comparative look at all three browsers (from top to bottom: Chrome, Firefox, Edge).
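Evening this out fits naturally into the track mixin from earlier (border-box picked here so that explicit track dimensions include any border and padding we add later):

```scss
@mixin track() {
  box-sizing: border-box; /* the Chrome default; content-box in Firefox and Edge */
  /* other common track styles */
}
```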
The default track margin and padding are both 0 in all three browsers - finally, an oasis of consistency!
The box-sizing of the track, comparative look at all three browsers (from top to bottom: Chrome, Firefox, Edge).
The values for the color property can be inherited from the parent slider in all three browsers.
The color of the track, comparative look at Chrome (top) and Firefox (bottom).
Even so, Edge is the odd one here, changing it to white, though setting it to initial changes it to black, which is the value we have for the actual input.
Resetting the color to initial in Edge.
Setting -webkit-appearance: none on the actual input in Edge makes the computed value of the color on the track transparent (if we haven't explicitly set a color value ourselves). Also, once we add a background on the track, the computed track color suddenly changes to black.
Unexpected consequence of adding a background track in Edge.
To a certain extent, the ability to inherit the color property is useful for theming, though inheriting custom properties can do a lot more here. For example, consider we want to use a silver for secondary things and an orange for what we want highlighted. We can define two CSS variables on the body and then use them across the page, even inside our range inputs.
body { --fading: #bbb; --impact: #f90 } h2 { border-bottom: solid .125em var(--impact) } h6 { color: var(--fading) } [type='range']:focus { box-shadow: 0 0 2px var(--impact) } @mixin track() { background: var(--fading) } @mixin thumb() { background: var(--impact) }
Sadly, while this works in Chrome and Firefox, Edge doesn't currently allow custom properties on the range input to be inherited down to its components.
Expected result (left) vs. result in Edge (right), where no track or thumb show up (live demo).
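One possible workaround (my own sketch, not part of the demo above) is to keep the theme values around as preprocessor variables and use them as var() fallbacks, so that Edge, where the inherited custom property never reaches the component, still picks up the hardcoded value:

```scss
$fading: #bbb;
$impact: #f90;

@mixin track() {
  background: $fading;                /* what Edge ends up using */
  background: var(--fading, $fading); /* what the other browsers use */
}

@mixin thumb() {
  background: $impact;
  background: var(--impact, $impact);
}
```

The trade-off is that changing the theme then means touching both the custom properties and the Sass variables.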
By default, there is no border on the track in Chrome or Firefox (border-width is 0 and border-style is none).
The border of the track, comparative look at Chrome (top) and Firefox (bottom).
Edge has no border on the track if we have no background set on the actual input and no background set on the track itself. However, once that changes, we get a thin (1px) black track border.
Another unexpected consequence of adding a track or parent slider background in Edge.
The default background-color is shown to be inherited as white, but then somehow we get a computed value of rgba(0,0,0,0) (transparent) in Chrome (both before and after -webkit-appearance: none). This also makes me wonder how come we can see the track before, since there's no background-color or background-image to give us anything visible. Firefox gives us a computed value of rgb(153,153,153) (#999) and Edge transparent (even though we might initially think it's some kind of silver, that is not the background of the ::-ms-track element - more on that a bit later).
The background-color of the track, comparative look at all three browsers (from top to bottom: Chrome, Firefox, Edge).
The range thumb component
Ready for the most annoying inconsistency yet? The thumb moves within the limits of the track's content-box in Chrome and within the limits of the actual input's content-box in Firefox and Edge, even when we make the track longer or shorter than the input (Chrome doesn't allow this, forcing the track's border-box to fit the slider's content-box horizontally).
The way Chrome behaves is illustrated below:
Recording of the thumb motion in Chrome from one end of the slider to the other.
The padding is transparent, while the content-box and the border are semitransparent. We've used orange for the actual slider, red for the track and purple for the thumb.
For Firefox, things are a bit different:
Recording of the thumb motion in Firefox from one end of the slider to the other (the three cases from top to bottom: the border-box of the track perfectly fits the content-box of the slider horizontally, it's longer and it's shorter).
In Chrome, the thumb is the child of the track, while in Firefox it's its sibling, so, looking at it this way, it makes sense that Chrome would move the thumb within the limits of the track's content-box and Firefox would move it within the limits of the slider's content-box. However, the thumb is inside the track in Edge too and it still moves within the limits of the slider's content-box.
Recording of the thumb motion in Edge from one end of the slider to the other (the three cases from top to bottom: the border-box of the track perfectly fits the content-box of the slider horizontally, it's longer and it's shorter).
While this looks very strange at first, it's because Edge forces the position of the track to static and we cannot change that, even if we set it to relative with !important.
Trying (and failing) to change the value of the position property on the track in Edge.
This means we may style our slider exactly the same for all browsers, but if its content-box doesn't coincide to that of its track horizontally (so if we have a non-zero lateral padding or border on the track), it won't move within the same limits in all browsers.
Furthermore, if we scale the track horizontally, then Chrome and Firefox behave as they did before, the thumb moving within the limits of the now scaled track's content-box in Chrome and within the limits of the actual input's content-box in Firefox. However, Edge makes the thumb move within an interval whose width equals that of the track's border-box, but starts from the left limit of the track's padding-box, which is probably explained by the fact that the transform property creates a stacking context.
Recording of the thumb motion in Edge when the track is scaled horizontally.
Vertically, the thumb is middle-aligned to the track in Firefox and seemingly middle-aligned in Edge, though I've been getting confusingly different results over multiple tests of the same situation. In Chrome, once we've set -webkit-appearance: none on the actual input and on the thumb so that we can style the slider, the top of the thumb's border-box is aligned to the top of the track's content-box.
While the Chrome decision seems weird at first, is annoying in most cases and lately has even contributed to breaking things in... Edge (but more about that in a moment), there is some logic behind it. By default, the height of the track in Chrome is determined by that of the thumb and if we look at things this way, the top alignment doesn't seem like complete insanity anymore.
However, we often want a thumb that's bigger than the track's height and is middle aligned to the track. We can correct the Chrome alignment with margin-top in the styles we set on the ::-webkit-slider-thumb pseudo.
Unfortunately, this way we're breaking the vertical alignment in Edge. This is because Edge now applies the styles set via ::-webkit-slider-thumb as well. At least we have the option of resetting margin-top to 0 in the styles we set on ::-ms-thumb. The demo below shows a very simple example of this in action.
See the Pen by thebabydino (@thebabydino) on CodePen.
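Stripped down to its essentials, the fix above looks like this (the -.25em offset is a placeholder, to be computed as half the track height minus half the thumb height for the actual dimensions used):

```css
/* Chrome: offset the thumb so it's middle-aligned to the track */
input[type='range']::-webkit-slider-thumb {
  margin-top: -.25em;
}

/* Edge applies the -webkit- styles too, so undo the offset here */
input[type='range']::-ms-thumb {
  margin-top: 0;
}
```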
Just like in the case of the track, the value of the box-sizing property is border-box in Chrome and content-box in Edge and Firefox, so, for consistent results across browsers, we need to set it explicitly if we want to have a non-zero border or padding on the thumb.
The margin and padding are both 0 by default in all three browsers.
After setting -webkit-appearance: none on both the slider and the thumb (setting it on just one of the two doesn't change anything), the dimensions of the thumb are reset from 10x21 (dimensions that don't depend on the font-size) to 129x0 in Chrome. The height of the track and actual slider also get reset to 0, since they depend on that of their content (the thumb inside, whose height has become 0).
The thumb box model in Chrome.
This is also why explicitly setting a height on the thumb makes the track take the same height.
According to Chrome DevTools, there is no border in either case, even though, before setting -webkit-appearance: none, it sure looks like there is one.
How the slider looks in Chrome before setting -webkit-appearance: none.
If that's not a border, it might be an outline or a box-shadow with no blur and a positive spread. But, according to Chrome DevTools, we don't have an outline, nor box-shadow on the thumb.
Computed values for outline and box-shadow in Chrome DevTools.
Setting -webkit-appearance: none in Edge makes the thumb dimensions go from 11x11 (values that don't depend on the font-size) to 0x0. Explicitly setting a height on the thumb makes the track take the initial height (11px).
The thumb box model in Edge.
In Edge, there's initially no border on the thumb. However, after setting a background on either the actual range input or any of its components, we suddenly get a solid 1px white lateral one (left and right, but not top and bottom), which visually turns to black in the :active state (even though Edge DevTools doesn't seem to notice that). Setting -webkit-appearance: none removes the border-width.
The thumb border in Edge.
In Firefox, without setting a property like background on the range input or its components, the dimensions of the thumb are 1.666x3.333 and, in this case, they don't change with the font-size. However, if we set something like background: transparent on the slider (or any background value on its components), then both the width and height of the thumb become 1em.
The thumb box model in Firefox.
In Firefox, if we are to believe what we see in DevTools, we initially have a solid thick grey (rgb(153, 153, 153)) border.
The thumb border in Firefox DevTools.
Visually however, I can't spot this thick grey border anywhere.
How the slider looks initially in Firefox, before setting a background on it or on any of its components.
After setting a background on the actual range input or one of its components, the thumb border actually becomes visually detectable and it seems to be .1em.
The thumb border in Firefox.
In Chrome and in Edge, the border-radius is always 0.
The thumb border-radius in Chrome (top) and Edge (bottom).
In Firefox however, we have a .5em value for this property, both before and after setting a background on the range input or on its components, even though the initial shape of the thumb doesn't look like a rectangle with rounded corners.
The thumb border-radius in Firefox.
The strange initial shape of the thumb in Firefox has made me wonder whether it doesn't have a clip-path set, but that's not the case according to DevTools.
The thumb clip-path in Firefox.
More likely, the thumb shape is due to the -moz-field setting, though, at least on Windows 10, this doesn't make it look like every other slider.
Initial appearance of slider in Firefox vs. appearance of a native Windows 10 slider.
The thumb's background-color is reported as being rgba(0, 0, 0, 0) (transparent) by Chrome DevTools, even though it looks grey before setting -webkit-appearance: none. We also don't seem to have a background-image that could explain the gradient or the lines on the thumb before setting -webkit-appearance: none. Firefox DevTools reports it as being rgb(240, 240, 240), even though it looks blue as long as we don't have a background explicitly set on the actual range input or on any of its components.
The thumb background-color in Chrome (top) and Firefox (bottom).
In Edge, the background-color is rgb(33, 33, 33) before setting -webkit-appearance: none and transparent after.
The thumb background-color in Edge.
The range progress (fill) component
We only have dedicated pseudo-elements for this in Firefox (::-moz-range-progress) and in Edge (::-ms-fill-lower). Note that this element is a sibling of the track in Firefox and a descendant in Edge. This means that it's sized relative to the actual input in Firefox, but relative to the track in Edge.
In order to better understand this, consider that the track's border-box perfectly fits horizontally within the slider's content-box and that the track has both a border and a padding.
In Firefox, the left limit of the border-box of the progress component always coincides with the left limit of the slider's content-box. When the current slider value is its minimum value, the right limit of the border-box of our progress also coincides with the left limit of the slider's content-box. When the current slider value is its maximum value, the right limit of the border-box of our progress coincides with the right limit of the slider's content-box.
This means the width of the border-box of our progress goes from 0 to the width of the slider's content-box. In general, when the thumb is at x% of the distance between the two limit values, the width of the border-box for our progress is x% of that of the slider's content-box.
This is shown in the recording below. The padding area is always transparent, while the border area and content-box are semitransparent (orange for the actual input, red for the track, grey for the progress and purple for the thumb).
How the width of the ::-moz-range-progress component changes in Firefox.
In Edge however, the left limit of the fill's border-box always coincides with the left limit of the track's content-box while the right limit of the fill's border-box always coincides with the vertical line that splits the thumb's border-box into two equal halves. This means that when the current slider value is its minimum value, the right limit of the fill's border-box is half the thumb's border-box to the right of the left limit of the track's content-box. And when the current slider value is its maximum value, the right limit of the fill's border-box is half the thumb's border-box to the left of the right limit of the track's content-box.
This means the width of the border-box of our progress goes from half the width of the thumb's border-box minus the track's left border and padding to the width of the track's content-box plus the track's right padding and border minus half the width of the thumb's border-box. In general, when the thumb is at x% of the distance between the two limit values, the width of the border-box for our progress is its minimum width plus x% of the difference between its maximum and its minimum width.
This is all illustrated by the following recording of this live demo you can play with:
How the width of the ::-ms-fill-lower component changes in Edge.
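The two width rules described above are easy to express as code. Here is a small sketch (hypothetical helper functions that mirror the descriptions, with all box measurements in px; they don't correspond to any browser API):

```javascript
// Width of ::-ms-fill-lower's border-box in Edge, per the description above.
// frac is the thumb's position between the two limit values, from 0 to 1.
function edgeFillWidth(frac, trackContentWidth, trackPadding, trackBorder, thumbWidth) {
  // at the minimum value: half the thumb's border-box, minus the track's
  // left border and padding
  const min = thumbWidth / 2 - trackBorder - trackPadding;
  // at the maximum value: the track's content-box plus its right padding
  // and border, minus half the thumb's border-box
  const max = trackContentWidth + trackPadding + trackBorder - thumbWidth / 2;
  return min + frac * (max - min);
}

// The Firefox ::-moz-range-progress width is simpler: a straight fraction
// of the slider's content-box width.
function firefoxProgressWidth(frac, sliderContentWidth) {
  return frac * sliderContentWidth;
}

console.log(edgeFillWidth(0, 200, 2, 1, 16));  // 5
console.log(edgeFillWidth(1, 200, 2, 1, 16));  // 195
console.log(firefoxProgressWidth(0.5, 200));   // 100
```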
While the description of the Edge approach above might make it seem more complicated, I've come to the conclusion that this is the best way to vary the width of this component as the Firefox approach may cause some issues.
For example, consider the case when we have no border or padding on the track for cross-browser consistency, and the heights of both the fill's and the thumb's border-boxes are equal to that of the track. Furthermore, the thumb is a disc (border-radius: 50%).
In Edge, all is fine:
How our example works in Edge.
But in Firefox, things look awkward (live demo):
How our example works in Firefox.
The good news is that we don't have other annoying and hard to get around inconsistencies in the case of this component.
box-sizing has the same computed value in both browsers - content-box.
The computed value for box-sizing in the case of the progress (fill) component: Firefox (top) and Edge (bottom).
In Firefox, the height of the progress is .2em, while the padding, border and margin are all 0.
The height of the progress in Firefox.
In Edge, the fill's height is equal to that of the track's content-box, with the padding, border and margin all being 0, just like in Firefox.
The height of the fill in Edge.
Initially, the background of this element is rgba(0, 0, 0, 0) (transparent, which is why we don't see it at first) in Firefox and rgb(0, 120, 115) in Edge.
The background-color of the progress (fill) in Firefox (top) and Edge (bottom).
In both cases, the computed value of the color property is rgb(0, 0, 0) (solid black).
The computed value for color in the case of the progress (fill) component: Firefox (top) and Edge (bottom).
WebKit browsers don't provide such a component and, since we don't have a way of accessing and using a track's ::before or ::after pseudos anymore, our only option of emulating this remains layering an extra, non-repeating background on top of the track's existing one for these browsers and making the size of this extra layer along the x axis depend on the current value of the range input.
The simplest way of doing this nowadays is by using a current value --val CSS variable, which holds the slider's current value. We update this variable every time the slider's value changes and we make the background-size of this top layer a calc() value depending on --val. This way, we don't have to recompute anything when the value of the range input changes - our calc() value is dynamic, so updating the --val variable is enough (not just for this background-size, but also for other styles that may depend on it as well).
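A minimal sketch of the technique (the generic selector, the colors, and the assumption that --min, --max and --val hold unitless numbers are all illustrative, not taken from the demo):

```css
input[type='range'] {
  /* --min, --max, --val are plain numbers kept in sync from JS */
  --ratio: calc((var(--val) - var(--min)) / (var(--max) - var(--min)));
  /* top layer: the non-repeating "fill", sized by the current ratio;
     bottom layer: the plain track color */
  background:
    linear-gradient(#e9683c, #e9683c) 0 0 / calc(var(--ratio)*100%) 100% no-repeat,
    #ccc;
}
```

On the JS side, one listener is enough to keep the variable fresh: `input.addEventListener('input', e => e.target.style.setProperty('--val', e.target.value))`.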
See the Pen by thebabydino (@thebabydino) on CodePen.
Also doing this for Firefox is an option if the way ::-moz-range-progress increases doesn't look good for our particular use case.
Edge also provides a ::-ms-fill-upper, which is basically the complement of the lower one, and it's the silver background of this pseudo-element that we initially see to the right of the thumb, not that of the track (the track is transparent).
Tick marks and labels
Edge is the only browser that shows tick marks by default. They're shown on the track, delimiting two, five, ten, twenty sections, the exact number depending initially on the track width. The only style we can change for these tick marks is the color property as this is inherited from the track (so setting color: transparent on the track removes the initial tick marks in Edge).
The structure that generates the initial tick marks on the track in Edge.
The spec says that tick marks and labels can be added by linking a datalist element, for whose option children we may specify a label attribute if we want that particular tick mark to also have a label.
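In markup, the spec's version of this looks something like the following (the values and labels here are illustrative):

```html
<input type="range" min="0" max="100" step="25" list="tick-marks">
<datalist id="tick-marks">
  <option value="0" label="0%">
  <option value="25">
  <option value="50" label="50%">
  <option value="75">
  <option value="100" label="100%">
</datalist>
```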
Unfortunately, though not at all surprising anymore at this point, browsers have a mind of their own here too. Firefox doesn't show anything - no tick marks, no labels. Chrome shows the tick marks, but only allows us to control their position along the slider with the option values. It doesn't allow us to style them in any way and it doesn't show any labels.
Tick marks in Chrome.
Also, setting -webkit-appearance: none on the actual slider (which is something that we need to do in order to be able to style it) makes these tick marks disappear.
Edge joins the club and doesn't show any labels either, and it doesn't allow much control over the look of the tick marks. While adding the datalist allows us to control which tick marks are shown where on the track, we cannot style them beyond changing the color property on the track component.
Tick marks in Edge.
In Edge, we also have ::-ms-ticks-before and ::-ms-ticks-after pseudo-elements. These are pretty much what they sound like - tick marks before and after the track. However, I'm having a hard time understanding how they really work.
They're hidden by display: none, so changing this property to block makes them visible if we also explicitly set a slider height, even though doing this does not change their own height.
How to make the tick marks created by ::-ms-ticks-after visible in Edge.
Beyond that, we can set properties like margin, padding, height, background, color in order to control their look. However, I have no idea how to control the thickness of individual ticks, how to give individual ticks gradient backgrounds or how to make some of them major and some minor.
So, at the end of the day, our best option if we want a nice cross-browser result remains using repeating-linear-gradient for the ticks and the label element for the values corresponding to these ticks.
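For instance, four equal intervals' worth of 2px ticks could be painted on the track with something along these lines (a sketch for the WebKit track pseudo-element; the colors and tick width are arbitrary, and ::-moz-range-track would take the same background):

```css
input[type='range']::-webkit-slider-runnable-track {
  background:
    /* one 2px tick at the end of every 25% repeating interval */
    repeating-linear-gradient(to right,
        transparent 0, transparent calc(25% - 2px),
        #7a7a7a calc(25% - 2px), #7a7a7a 25%),
    /* plain track color underneath */
    #ccc;
}
```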
See the Pen by thebabydino (@thebabydino) on CodePen.
Tooltip/ current value display
Edge is the only browser that provides a tooltip via ::-ms-tooltip, but this doesn't show up in the DOM, cannot really be styled (we can only choose to hide it by setting display: none on it) and can only display integer values, so it's completely useless for a range input between let's say .1 and .4 - all the values it displays are 0!
::-ms-tooltip when range limits are both subunitary.
So our best bet is to just hide this and use the output element for all browsers, again taking advantage of the possibility of storing the current slider value into a --val variable and then using a calc() value depending on this variable for the position.
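Positioning that output then reduces to one more calc() on the same variable — a sketch, assuming --min, --max and --val are unitless numbers kept in sync from JS and the output sits inside a positioned wrapper around the slider:

```css
output {
  position: absolute;
  /* place the output over the thumb's current position */
  left: calc((var(--val) - var(--min)) / (var(--max) - var(--min)) * 100%);
  transform: translateX(-50%);
}
```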
See the Pen by thebabydino (@thebabydino) on CodePen.
Orientation
The good news is that every browser allows us to create vertical sliders. The bad news is, as you may have guessed... every browser provides a different way of doing this, none of which is the one presented in the spec (setting a width smaller than the height on the range input). WebKit browsers have opted for -webkit-appearance: slider-vertical, Edge for writing-mode: bt-lr, while Firefox controls this via an orient attribute with a value of 'vertical'.
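Side by side, the four approaches look like this (only the last three actually work anywhere):

```css
/* the spec way: a width smaller than the height (no browser honors this) */
input[type='range'] { width: 1em; height: 10em; }

/* WebKit browsers */
input[type='range'] { -webkit-appearance: slider-vertical; }

/* Edge */
input[type='range'] { writing-mode: bt-lr; }
```

Firefox keys off markup instead: `<input type="range" orient="vertical">`.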
The really bad news is that, for WebKit browsers, making a slider vertical this way leaves us unable to set any custom styles on it (as setting custom styles requires a value of none for -webkit-appearance).
Our best option is to just style our range input as a horizontal one and then rotate it with a CSS transform.
See the Pen by thebabydino (@thebabydino) on CodePen.
A Sliding Nightmare: Understanding the Range Input is a post from CSS-Tricks
iyarpage · 6 years
Text
Swizzling in iOS 11 with UIDebuggingInformationOverlay
This is an abridged chapter from our book Advanced Apple Debugging & Reverse Engineering, which has been completely updated for Xcode 9.1 and iOS 11. Enjoy!
In this tutorial, you'll go after a series of private UIKit classes that aid in visual debugging. The chief of these private classes, UIDebuggingInformationOverlay, was introduced in iOS 9.0 and received widespread attention in May 2017, thanks to an article (http://ift.tt/2rYzDhc) highlighting these classes and their usage.
Unfortunately, as of iOS 11, Apple caught wind of developers accessing this class (likely through the popularity of the above article) and has added several checks to ensure that only internal apps that link to UIKit have access to these private debugging classes.
You’ll explore UIDebuggingInformationOverlay and learn why this class fails to work in iOS 11, as well as explore avenues to get around these checks imposed by Apple by writing to specific areas in memory first through LLDB. Then, you’ll learn alternative tactics you can use to enable UIDebuggingInformationOverlay through Objective-C’s method swizzling.
I specifically require you to use an iOS 11 Simulator for this tutorial, as Apple can impose new checks on these classes in the future, and I have no intention of "upping the ante" if they make this class harder to use or remove it from release UIKit builds altogether.
Between iOS 10 and 11
In iOS 9 & 10, setting up and displaying the overlay was rather trivial. In both these iOS versions, the following LLDB commands were all that was needed:
(lldb) po [UIDebuggingInformationOverlay prepareDebuggingOverlay]
(lldb) po [[UIDebuggingInformationOverlay overlay] toggleVisibility]
This would produce the following overlay:
If you have an iOS 10 Simulator on your computer, I’d recommend you attach to any iOS process and try the above LLDB commands out so you know what is expected.
Unfortunately, some things changed in iOS 11. Executing the exact same LLDB commands in iOS 11 will produce nothing.
To understand what’s happening, you need to explore the overridden methods UIDebuggingInformationOverlay contains and wade into the assembly.
Use LLDB to attach to any iOS 11.x Simulator process; this can be MobileSafari, SpringBoard, or your own app. It doesn't matter if it's your own app or not, as you will be exploring assembly in the UIKit module.
For this example, I’ll launch the Photos application in the Simulator. Head on over to Terminal, then type the following:
lldb -n MobileSlideShow
Once you’ve attached to any iOS Simulator process, use LLDB to search for any overridden methods by the UIDebuggingInformationOverlay class.
You can use the image lookup LLDB command:
(lldb) image lookup -rn UIDebuggingInformationOverlay
Or alternatively, you can use the methods command you create in Chapter 14 of the book, “Dynamic Frameworks”:
(lldb) methods UIDebuggingInformationOverlay
The following command would be equivalent to that:
(lldb) exp -lobjc -O -- [UIDebuggingInformationOverlay _shortMethodDescription]
Take note of the overridden init instance method found in the output of either command.
You’ll need to explore what this init is doing. You can follow along with LLDB’s disassemble command, but for visual clarity, I’ll use my own custom LLDB disassembler, dd, which outputs in color and is available here: http://ift.tt/2qRRhWC.
Here’s the init method’s assembly in iOS 10. If you want to follow along in black & white in LLDB, type:
(lldb) disassemble -n "-[UIDebuggingInformationOverlay init]"
Again, this is showing the assembly of this method in iOS 10.
Colors (and dd‘s comments marked in green) make reading x64 assembly soooooooooooo much easier. In pseudo-Objective-C code, this translates to the following:
@implementation UIDebuggingInformationOverlay

- (instancetype)init {
  if (self = [super init]) {
    [self _setWindowControlsStatusBarOrientation:NO];
  }
  return self;
}

@end
Nice and simple for iOS 10. Let’s look at the same method for iOS 11:
This roughly translates to the following:
@implementation UIDebuggingInformationOverlay

- (instancetype)init {
  static BOOL overlayEnabled = NO;
  static dispatch_once_t onceToken;
  dispatch_once(&onceToken, ^{
    overlayEnabled = UIDebuggingOverlayIsEnabled();
  });
  if (!overlayEnabled) {
    return nil;
  }

  if (self = [super init]) {
    [self _setWindowControlsStatusBarOrientation:NO];
  }
  return self;
}

@end
There are checks enforced in iOS 11, thanks to UIDebuggingOverlayIsEnabled(), that make init return nil if this code is not running on an internal Apple device.
You can verify these disappointing precautions yourself by typing the following in LLDB on a iOS 11 Simulator:
(lldb) po [UIDebuggingInformationOverlay new]
This is a shorthand way of alloc/init'ing a UIDebuggingInformationOverlay. You'll get nil.
With LLDB, disassemble the first 10 lines of assembly for -[UIDebuggingInformationOverlay init]:
(lldb) disassemble -n "-[UIDebuggingInformationOverlay init]" -c10
Your assembly won’t be color coded, but this is a small enough chunk to understand what’s going on.
Your output will look similar to:
UIKit`-[UIDebuggingInformationOverlay init]:
    0x10d80023e <+0>:  push rbp
    0x10d80023f <+1>:  mov  rbp, rsp
    0x10d800242 <+4>:  push r14
    0x10d800244 <+6>:  push rbx
    0x10d800245 <+7>:  sub  rsp, 0x10
    0x10d800249 <+11>: mov  rbx, rdi
    0x10d80024c <+14>: cmp  qword ptr [rip + 0x9fae84], -0x1 ; UIDebuggingOverlayIsEnabled.__overlayIsEnabled + 7
    0x10d800254 <+22>: jne  0x10d8002c0              ; <+130>
    0x10d800256 <+24>: cmp  byte ptr [rip + 0x9fae73], 0x0 ; mainHandler.onceToken + 7
    0x10d80025d <+31>: je   0x10d8002a8              ; <+106>
Pay close attention to offset 14 and 22:
0x10d80024c <+14>: cmp qword ptr [rip + 0x9fae84], -0x1 ; UIDebuggingOverlayIsEnabled.__overlayIsEnabled + 7
0x10d800254 <+22>: jne 0x10d8002c0                      ; <+130>
Thankfully, Apple includes the DWARF debugging information with their frameworks, so we can see what symbols they are using to access certain memory addresses.
Take note of the UIDebuggingOverlayIsEnabled.__overlayIsEnabled + 7 comment in the disassembly. I actually find it rather annoying that LLDB does this and would consider it a bug. Instead of correctly referencing a symbol in memory, LLDB will reference the previous symbol and tack a + 7 onto it. The value at UIDebuggingOverlayIsEnabled.__overlayIsEnabled + 7 is what we want, but the comment is not helpful, because it has the name of the wrong symbol in it. This is why I often choose to use my dd command over LLDB's, since it checks for this off-by-one error and replaces the comment with the correct one.
But regardless of the incorrect name LLDB is choosing in its comments, this address is being compared to -1 (aka 0xffffffffffffffff in a 64-bit process) and jumps to a specific address if this address doesn’t contain -1. Oh… and now that we’re on the subject, dispatch_once_t variables start out as 0 (as they are likely static) and get set to -1 once a dispatch_once block completes (hint, hint).
Yes, this first check in memory is seeing if code should be executed in a dispatch_once block. You want the dispatch_once logic to be skipped, so you’ll set this value in memory to -1.
From the assembly above, you have two options to obtain the memory address of interest:
You can combine the RIP instruction pointer with the offset to get the load address. In my assembly, I can see this address is located at [rip + 0x9fae84]. Remember, the RIP register will resolve to the next row of assembly since the program counter increments, then executes an instruction.
This means that [rip + 0x9fae84] will resolve to [0x10d800254 + 0x9fae84] in my case. This will then resolve to 0x000000010e1fb0d8, the memory address guarding the overlay from being initialized.
You can use LLDB’s image lookup command with the verbose and symbol option to find the load address for UIDebuggingOverlayIsEnabled.__overlayIsEnabled.
(lldb) image lookup -vs UIDebuggingOverlayIsEnabled.__overlayIsEnabled
From the output, look for the range field for the end address. Again, this is due to LLDB not giving you the correct symbol. For my process, I got range = [0x000000010e1fb0d0-0x000000010e1fb0d8). This means the byte of interest for me is located at: 0x000000010e1fb0d8. If I wanted to know the symbol this address is actually referring to, I can type:
(lldb) image lookup -a 0x000000010e1fb0d8
Which will then output:
Address: UIKit[0x00000000015b00d8] (UIKit.__DATA.__bss + 24824)
Summary: UIKit`UIDebuggingOverlayIsEnabled.onceToken
This UIDebuggingOverlayIsEnabled.onceToken is the correct name of the symbol you want to go after.
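The RIP arithmetic from the first option is easy to sanity-check outside of LLDB:

```python
# RIP points at the *next* instruction, so the operand [rip + 0x9fae84]
# resolves to that instruction's address plus the displacement.
rip = 0x10d800254   # address of the instruction after the cmp
disp = 0x9fae84     # displacement taken from the disassembly
print(hex(rip + disp))  # 0x10e1fb0d8
```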
Bypassing Checks by Changing Memory
We now know the exact bytes where this Boolean check occurs.
Let’s first see what value this has:
(lldb) x/gx 0x000000010e1fb0d8
This will dump out 8 bytes in hex located at 0x000000010e1fb0d8 (your address will be different). If you’ve executed the po [UIDebuggingInformationOverlay new] command earlier, you’ll see -1; if you haven’t, you’ll see 0.
Let’s change this. In LLDB type:
(lldb) mem write 0x000000010e1fb0d8 0xffffffffffffffff -s 8
The -s option specifies the number of bytes to write. If typing out 16 f's is unappealing to you, there are always alternatives to accomplish the same task. For example, the following would be equivalent:
(lldb) po *(long *)0x000000010e1fb0d8 = -1
You can of course verify your work by just examining the memory again.
(lldb) x/gx 0x000000010e1fb0d8
The output should be 0xffffffffffffffff now.
Your Turn
I just showed you how to knock out the initial check for UIDebuggingOverlayIsEnabled.onceToken to make the dispatch_once block think it has already run, but there’s one more check that will hinder your process.
Re-run the disassemble command you typed earlier:
(lldb) disassemble -n "-[UIDebuggingInformationOverlay init]" -c10
At the very bottom of output are these two lines:
0x10d800256 <+24>: cmp byte ptr [rip + 0x9fae73], 0x0 ; mainHandler.onceToken + 7
0x10d80025d <+31>: je  0x10d8002a8                     ; <+106>
This mainHandler.onceToken is, again, the wrong symbol; you care about the symbol immediately following it in memory. I want you to perform the same actions you did on UIDebuggingOverlayIsEnabled.onceToken, but this time apply them to the memory address the mainHandler.onceToken comment points to. Once you perform the RIP arithmetic on that mainHandler.onceToken reference, you'll realize the symbol you are actually after is UIDebuggingOverlayIsEnabled.__overlayIsEnabled.
You first need to find the location of mainHandler.onceToken in memory. You can either perform the RIP arithmetic from the above assembly or use image lookup -vs mainHandler.onceToken to find the end address of its range. Once you've found the memory address, write a -1 value into it.
Verifying Your Work
Now that you’ve successfully written a -1 value to mainHandler.onceToken, it’s time to check your work to see if any changes you’ve made have bypassed the initialization checks.
In LLDB type:
(lldb) po [UIDebuggingInformationOverlay new]
Provided you correctly augmented the memory, you’ll be greeted with some more cheery output:
<UIDebuggingInformationOverlay: 0x7fb622107860; frame = (0 0; 768 1024); hidden = YES; gestureRecognizers = <NSArray: 0x60400005aac0>; layer = <UIWindowLayer: 0x6040000298a0>>
And while you’re at it, make sure the class method overlay returns a valid instance:
(lldb) po [UIDebuggingInformationOverlay overlay]
If you got nil for either of the above LLDB commands, make sure you have augmented the correct addresses in memory. If you’re absolutely sure you have augmented the correct addresses and you still get a nil return value, make sure you’re running either the iOS 11.0-11.1 Simulator as Apple could have added additional checks to prevent this from working in a version since this tutorial was written!
If all goes well, and you have a valid instance, let’s put this thing on the screen!
In LLDB, type:
(lldb) po [[UIDebuggingInformationOverlay overlay] toggleVisibility]
Then resume the process:
(lldb) continue
Alright… we got something on the screen, but it’s blank!?
Sidestepping Checks in prepareDebuggingOverlay
The UIDebuggingInformationOverlay is blank because we didn't call the class method +[UIDebuggingInformationOverlay prepareDebuggingOverlay].
Dumping the assembly for this method, we can see one concerning check immediately:
Offsets 14, 19, and 21 call a function named _UIGetDebuggingOverlayEnabled, then test if AL (RAX's single-byte cousin) is 0. If it is, control jumps to the end of this function. In other words, the logic in this method is gated by the return value of _UIGetDebuggingOverlayEnabled.
Since we are still using LLDB to build a POC, let’s set a breakpoint on this function, step out of _UIGetDebuggingOverlayEnabled, then augment the value stored in the AL register before the check in offset 19 occurs.
Create a breakpoint on _UIGetDebuggingOverlayEnabled:
(lldb) b _UIGetDebuggingOverlayEnabled
LLDB will indicate that it's successfully created a breakpoint on the _UIGetDebuggingOverlayEnabled function.
Now, let’s execute the [UIDebuggingInformationOverlay prepareDebuggingOverlay] method, but have LLDB honor breakpoints. Type the following:
(lldb) exp -i0 -O -- [UIDebuggingInformationOverlay prepareDebuggingOverlay]
This uses the -i option that determines if LLDB should ignore breakpoints. You’re specifying 0 to say that LLDB shouldn’t ignore any breakpoints.
Provided all went well, execution will start in the prepareDebuggingOverlay method and call out to the _UIGetDebuggingOverlayEnabled where execution will stop.
Let’s just tell LLDB to resume execution until it steps out of this _UIGetDebuggingOverlayEnabled function:
(lldb) finish
Control flow will finish up in _UIGetDebuggingOverlayEnabled and we’ll be back in the prepareDebuggingOverlay method, right before the test of the AL register on offset 19:
UIKit`+[UIDebuggingInformationOverlay prepareDebuggingOverlay]:
    0x11191a312 <+0>:  push rbp
    0x11191a313 <+1>:  mov  rbp, rsp
    0x11191a316 <+4>:  push r15
    0x11191a318 <+6>:  push r14
    0x11191a31a <+8>:  push r13
    0x11191a31c <+10>: push r12
    0x11191a31e <+12>: push rbx
    0x11191a31f <+13>: push rax
    0x11191a320 <+14>: call 0x11191b2bf              ; _UIGetDebuggingOverlayEnabled
->  0x11191a325 <+19>: test al, al
    0x11191a327 <+21>: je   0x11191a430              ; <+286>
    0x11191a32d <+27>: lea  rax, [rip + 0x9fc19c]    ; UIApp
Through LLDB, print out the value in the AL register:
(lldb) p/x $al
Unless you work at a specific fruit company inside a fancy new “spaceship” campus, you’ll likely get 0x00.
Change this around to 0xff:
(lldb) po $al = 0xff
Let’s verify this worked by single instruction stepping:
(lldb) si
This will get you onto the following line:
je 0x11191a430 ; <+286>
If AL was 0x0 at the time of the test instruction, this will move you to offset 286. If AL wasn't 0x0, execution will continue past the conditional jump without taking it.
Make sure this succeeded by performing one more instruction step.
(lldb) si
If you’re on offset 286, this has failed and you’ll need to repeat the process. However, if you find the instruction pointer has not conditionally jumped, then this has worked!
There’s nothing more you need to do now, so resume execution in LLDB:
(lldb) continue
So, what did the logic do exactly in +[UIDebuggingInformationOverlay prepareDebuggingOverlay]?
To help ease the visual burden, here is a rough translation of what the +[UIDebuggingInformationOverlay prepareDebuggingOverlay] method is doing:
+ (void)prepareDebuggingOverlay {
  if (_UIGetDebuggingOverlayEnabled()) {
    id handler = [UIDebuggingInformationOverlayInvokeGestureHandler mainHandler];
    UITapGestureRecognizer *tapGesture =
        [[UITapGestureRecognizer alloc] initWithTarget:handler
                                                action:@selector(_handleActivationGesture:)];
    [tapGesture setNumberOfTouchesRequired:2];
    [tapGesture setNumberOfTapsRequired:1];
    [tapGesture setDelegate:handler];

    UIView *statusBarWindow = [UIApp statusBarWindow];
    [statusBarWindow addGestureRecognizer:tapGesture];
  }
}
This is interesting: There is logic to handle a two finger tap on UIApp’s statusBarWindow. Once that happens, a method called _handleActivationGesture: will be executed on a UIDebuggingInformationOverlayInvokeGestureHandler singleton, mainHandler.
That makes you wonder: what is the logic in -[UIDebuggingInformationOverlayInvokeGestureHandler _handleActivationGesture:] there for?
A quick assembly dump using dd brings up an interesting area:
The UITapGestureRecognizer instance, passed in via the RDI register, gets its state compared to the value 0x3 (see offset 30). If it is 3, control continues; if not, control jumps towards the end of the function.
A quick lookup in the header file for UIGestureRecognizer, tells us the state has the following enum values:
typedef NS_ENUM(NSInteger, UIGestureRecognizerState) {
    UIGestureRecognizerStatePossible,
    UIGestureRecognizerStateBegan,
    UIGestureRecognizerStateChanged,
    UIGestureRecognizerStateEnded,
    UIGestureRecognizerStateCancelled,
    UIGestureRecognizerStateFailed,
    UIGestureRecognizerStateRecognized = UIGestureRecognizerStateEnded
};
Counting from 0, we can see control will only execute the bulk of the code if the UITapGestureRecognizer‘s state is equal to UIGestureRecognizerStateEnded.
So what does this mean exactly? Not only did UIKit developers put restrictions on accessing the UIDebuggingInformationOverlay class (which you’ve already modified in memory), they’ve also added a “secret” UITapGestureRecognizer to the status bar window that executes the setup logic only when you complete a two finger tap on it.
How cool is that?
So, Recapping…
Before we try this thing out, let’s quickly recap what you did just in case you need to restart fresh:
You found the memory address of UIDebuggingOverlayIsEnabled.onceToken:
(lldb) image lookup -vs UIDebuggingOverlayIsEnabled.onceToken
And then set it to -1 via LLDB’s memory write or just casting the address to a long pointer and setting the value to -1 like so:
(lldb) po *(long *)0x000000010e1fb0d8 = -1
You also performed the same action for UIDebuggingOverlayIsEnabled.__overlayIsEnabled.
You then created a breakpoint on _UIGetDebuggingOverlayEnabled(), executed the +[UIDebuggingInformationOverlay prepareDebuggingOverlay] command and changed the return value that _UIGetDebuggingOverlayEnabled() produced so the rest of the method could continue to execute.
This was one of the many ways to bypass Apple’s new iOS 11 checks to prevent you from using these classes.
Trying This Out
Since you’re using the Simulator, this means you need to hold down Option on the keyboard to simulate two touches. Once you get the two touches parallel, hold down the Shift key to drag the tap circles around the screen. Position the tap circles on the status bar of your application, and then click.
You’ll be greeted with the fully functional UIDebuggingInformationOverlay!
Introducing Method Swizzling
Reflecting, how long did that take? In addition, we have to manually set this through LLDB every time UIKit gets loaded into a process. Finding and setting these values in memory could definitely be done through a custom LLDB script, but there's an elegant alternative: Objective-C's method swizzling.
But before diving into how, let’s talk about the what.
Method swizzling is the process of dynamically changing what an Objective-C method does at runtime. Compiled code in the __TEXT section of a binary can’t be modified (well, it can with the proper entitlements that Apple will not give you, but we won’t get into that). However, when executing Objective-C code, objc_msgSend comes into play. In case you forgot, objc_msgSend will take an instance (or class), a Selector and a variable number of arguments and jump to the location of the function.
Method swizzling has many uses, but oftentimes people use this tactic to modify a parameter or return value. Alternatively, they can snoop and see when a function is executing code without searching for references in assembly. In fact, Apple even (precariously) uses method swizzling in its own codebase, like KVO!
Since the internet is full of great references on method swizzling, I won’t start at square one (but if you want to, I’d say http://ift.tt/NZjTWH has the clearest and cleanest discussion of it). Instead, we’ll start with the basic example, then quickly ramp up to something I haven’t seen anyone do with method swizzling: use it to jump into an offset of a method to avoid any unwanted checks!
Finally — Onto A Sample Project
Included in this tutorial is a sample project named Overlay, which you can download here. It’s quite minimal; it only has a UIButton smack in the middle that executes the expected logic to display the UIDebuggingInformationOverlay.
You’ll build an Objective-C NSObject category to perform the Objective-C swizzling on the code of interest as soon as the module loads, using the Objective-C-only load class method.
Build and run the project. Tap on the lovely UIButton. You’ll only get some angry output from stderr saying:
UIDebuggingInformationOverlay 'overlay' method returned nil
As you already know, this is because of the short-circuited overridden init method of UIDebuggingInformationOverlay.
Let’s knock out this easy swizzle first; open NSObject+UIDebuggingInformationOverlayInjector.m. Jump to Section 1, marked by a pragma. In this section, add the following Objective-C class:
//****************************************************/
#pragma mark - Section 1 - FakeWindowClass
//****************************************************/

@interface FakeWindowClass : UIWindow
@end

@implementation FakeWindowClass

- (instancetype)initSwizzled {
  if (self = [super init]) {
    [self _setWindowControlsStatusBarOrientation:NO];
  }
  return self;
}

@end
For this part, you declared an Objective-C class named FakeWindowClass, which is a subclass of a UIWindow. Unfortunately, this code will not compile since _setWindowControlsStatusBarOrientation: is a private method.
Jump up to section 0 and forward declare this private method.
//****************************************************/
#pragma mark - Section 0 - Private Declarations
//****************************************************/

@interface NSObject()
- (void)_setWindowControlsStatusBarOrientation:(BOOL)orientation;
@end
This will quiet the compiler and let the code build. Remember, UIDebuggingInformationOverlay‘s init method has checks that make it return nil. Since that init was rather simple, you completely sidestepped its logic and reimplemented it yourself, minus all the “bad stuff”!
Now, replace the code for UIDebuggingInformationOverlay‘s init with FakeWindowClass‘s initSwizzled method. Jump down to section 2 in NSObject‘s load method and replace the load method with the following:
+ (void)load {
  static dispatch_once_t onceToken;
  dispatch_once(&onceToken, ^{
    Class cls = NSClassFromString(@"UIDebuggingInformationOverlay");
    NSAssert(cls, @"DBG Class is nil?");

    // Swizzle code here
    [FakeWindowClass swizzleOriginalSelector:@selector(init)
                         withSizzledSelector:@selector(initSwizzled)
                                    forClass:cls
                               isClassMethod:NO];
  });
}
Build and rerun the Overlay app with this new code. Tap on the UIButton to see what happens now that you’ve replaced the init to produce a valid instance.
UIDebuggingInformationOverlay now pops up without any content. Almost there!
The Final Push
You’re about to build the final snippet of code: the soon-to-be replacement for prepareDebuggingOverlay. The original prepareDebuggingOverlay has a check at the start of the method to see whether _UIGetDebuggingOverlayEnabled() returns 0x0 or 0x1. If it returns 0x0, control jumps to the end of the function.
In order to get around this, you’ll “simulate” a call instruction by pushing a return address onto the stack, but instead of call‘ing, you’ll jmp into an offset past the _UIGetDebuggingOverlayEnabled check. That way, you can perform the function prologue in your stack frame and skip the dreaded check at the beginning of prepareDebuggingOverlay.
In NSObject+UIDebuggingInformationOverlayInjector.m, navigate down to Section 3 – prepareDebuggingOverlay and add the following snippet of code:
+ (void)prepareDebuggingOverlaySwizzled {
  Class cls = NSClassFromString(@"UIDebuggingInformationOverlay");
  SEL sel = @selector(prepareDebuggingOverlaySwizzled);
  Method m = class_getClassMethod(cls, sel);
  IMP imp = method_getImplementation(m); // 1

  void (*methodOffset) = (void *)((imp + (long)27)); // 2
  void *returnAddr = &&RETURNADDRESS; // 3

  // You’ll add some assembly here in a sec

RETURNADDRESS: ; // 4
}
Let’s break this crazy witchcraft down:
I want to get the starting address of the original prepareDebuggingOverlay. However, I know this will be swizzled code, so when this code executes, prepareDebuggingOverlaySwizzled will actually point to the real prepareDebuggingOverlay starting address.
I take the starting address of the original prepareDebuggingOverlay (given to me through the imp variable) and offset it in memory past the _UIGetDebuggingOverlayEnabled() check. I used LLDB to figure out the exact offset by dumping the assembly and calculating it (disassemble -n "+[UIDebuggingInformationOverlay prepareDebuggingOverlay]"). This is insanely brittle, as any new code or compiler change from clang will likely break it. I strongly recommend you calculate this offset yourself in case it changes past iOS 11.1.1.
Since you are faking a function call, you need an address to return to after this soon-to-be-executed function offset finishes. This is accomplished by taking the address of a declared label. Labels are a feature rarely used by most developers; they let you jmp to different areas of a function. Their use in modern programming is considered bad practice, since if/for/while blocks can accomplish the same thing… but not for this crazy hack.
This is the declaration of the label RETURNADDRESS. And yes, you do need that semicolon after the label: C syntax requires a label to be immediately followed by a statement.
Time to cap this bad boy off with some sweet inline assembly! Right above the label RETURNADDRESS declaration, add the following inline assembly:
+ (void)prepareDebuggingOverlaySwizzled {
  Class cls = NSClassFromString(@"UIDebuggingInformationOverlay");
  SEL sel = @selector(prepareDebuggingOverlaySwizzled);
  Method m = class_getClassMethod(cls, sel);
  IMP imp = method_getImplementation(m);

  void (*methodOffset) = (void *)((imp + (long)27));
  void *returnAddr = &&RETURNADDRESS;

  __asm__ __volatile__(       // 1
      "pushq %0\n\t"          // 2
      "pushq %%rbp\n\t"       // 3
      "movq %%rsp, %%rbp\n\t"
      "pushq %%r15\n\t"
      "pushq %%r14\n\t"
      "pushq %%r13\n\t"
      "pushq %%r12\n\t"
      "pushq %%rbx\n\t"
      "pushq %%rax\n\t"
      "jmp *%1\n\t"           // 4
      :
      : "r" (returnAddr), "r" (methodOffset)); // 5

RETURNADDRESS: ;
}
Don’t be scared: you’re about to write x86_64 assembly in AT&T format (Apple’s assembler is not a fan of Intel syntax). The __volatile__ is there to hint to the compiler not to try to optimize this away.
You can think of this sort of like C’s printf, where the %0 will be replaced by the value supplied by returnAddr. In x86, the return address is pushed onto the stack right before entering a function. As you know, returnAddr points to an executable address following this assembly; this is how we fake an actual function call!
The following assembly is copy-pasted from the function prologue of +[UIDebuggingInformationOverlay prepareDebuggingOverlay]. This performs the function’s setup while letting us skip the dreaded check.
Finally, we jump to offset 27 of prepareDebuggingOverlay, now that we have set up all the data and stack information we need to not crash. The jmp *%1 will resolve to jmp‘ing to the value stored at methodOffset. So what are those “r” strings? I won’t get too far into the details of inline assembly, as I think your head might explode from information overload (think Scanners); just know that this tells the assembler that your assembly can use any register for reading these values.
Jump back up to section 2 where the swizzling is performed in the +load method and add the following line of code to the end of the method:
[self swizzleOriginalSelector:@selector(prepareDebuggingOverlay)
          withSizzledSelector:@selector(prepareDebuggingOverlaySwizzled)
                     forClass:cls
                isClassMethod:YES];
Build and run. Tap on the UIButton to execute the code required to set up the UIDebuggingInformationOverlay class, then perform the two-finger tap on the status bar.
Omagerd, can you believe that worked?
I am definitely a fan of the hidden status bar dual tap thing, but let’s say you wanted to bring this up solely from code. Here’s what you can do:
Open ViewController.swift. At the top of the file add:
import UIKit.UIGestureRecognizerSubclass
This will let you set the state of a UIGestureRecognizer (default headers allow only read-only access to the state variable).
Once that’s done, augment the code in overlayButtonTapped(_ sender: Any) to be the following:
@IBAction func overlayButtonTapped(_ sender: Any) {
  guard let cls = NSClassFromString("UIDebuggingInformationOverlay") as? UIWindow.Type else {
    print("UIDebuggingInformationOverlay class doesn't exist!")
    return
  }
  cls.perform(NSSelectorFromString("prepareDebuggingOverlay"))

  let tapGesture = UITapGestureRecognizer()
  tapGesture.state = .ended

  let handlerCls = NSClassFromString("UIDebuggingInformationOverlayInvokeGestureHandler") as! NSObject.Type
  let handler = handlerCls
    .perform(NSSelectorFromString("mainHandler"))
    .takeUnretainedValue()
  let _ = handler
    .perform(NSSelectorFromString("_handleActivationGesture:"), with: tapGesture)
}
Final build and run. Tap on the button and see what happens.
Boom.
Where to Go From Here?
You can download the final project from this tutorial here.
Crazy tutorial, eh? In this chapter, you spelunked into memory, changing dispatch_once_t tokens as well as Booleans, to build a POC UIDebuggingInformationOverlay that’s compatible with iOS 11, getting around Apple’s newly introduced checks that prevent you from using this class.
Then you used Objective-C’s method swizzling to perform the same actions as well as hook into only a portion of the original method, bypassing several short-circuit checks.
This is why reverse engineering Objective-C is so much fun: you can hook into methods that are quietly called in private code you don’t have the source for, and make changes or monitor what they’re doing.
Still have energy after that brutal chapter? This swizzled code will not work on an ARM64 device. You’ll need to look at the assembly and perform an alternative action for that architecture, likely through a preprocessor macro.
If you enjoyed what you learned in the tutorial, why not check out the complete Advanced Apple Debugging & Reverse Engineering book, available in our store?
One thing you can be sure of: after reading this book, you’ll have the tools and knowledge to answer even the most obscure question about your code — or even someone else’s.
Questions? Comments? Come join the forum discussion below!
The post Swizzling in iOS 11 with UIDebuggingInformationOverlay appeared first on Ray Wenderlich.
iyarpage · 6 years
Swizzling in iOS 11 with UIDebuggingInformationOverlay
This is an abridged chapter from our book Advanced Apple Debugging & Reverse Engineering, which has been completely updated for Xcode 9.1 and iOS 11. Enjoy!
In this tutorial, you’ll go after a series of private UIKit classes that aid in visual debugging. The chief of these private classes, UIDebuggingInformationOverlay, was introduced in iOS 9.0 and received widespread attention in May 2017, thanks to an article (http://ift.tt/2rYzDhc) highlighting these classes and their usage.
Unfortunately, as of iOS 11, Apple caught wind of developers accessing this class (likely through the popularity of the above article) and has added several checks to ensure that only internal apps that link to UIKit have access to these private debugging classes.
You’ll explore UIDebuggingInformationOverlay and learn why this class fails to work in iOS 11, and you’ll get around the checks imposed by Apple, first by writing to specific areas of memory through LLDB. Then, you’ll learn alternative tactics to enable UIDebuggingInformationOverlay through Objective-C’s method swizzling.
This tutorial specifically requires an iOS 11 Simulator, since Apple may impose new checks on these classes in the future; I have no intention of “upping the ante” if they make this class harder to use or remove it from release UIKit builds altogether.
Between iOS 10 and 11
In iOS 9 & 10, setting up and displaying the overlay was rather trivial. In both these iOS versions, the following LLDB commands were all that was needed:
(lldb) po [UIDebuggingInformationOverlay prepareDebuggingOverlay]
(lldb) po [[UIDebuggingInformationOverlay overlay] toggleVisibility]
This would produce the following overlay:
If you have an iOS 10 Simulator on your computer, I’d recommend you attach to any iOS process and try the above LLDB commands out so you know what is expected.
Unfortunately, some things changed in iOS 11. Executing the exact same LLDB commands in iOS 11 will produce nothing.
To understand what’s happening, you need to explore the overridden methods UIDebuggingInformationOverlay contains and wade into the assembly.
Use LLDB to attach to any iOS 11.x Simulator process; this can be MobileSafari, SpringBoard, or your own work. It doesn’t matter if it’s your own app or not, as you will be exploring assembly in the UIKit module.
For this example, I’ll launch the Photos application in the Simulator. Head on over to Terminal, then type the following:
lldb -n MobileSlideShow
Once you’ve attached to any iOS Simulator process, use LLDB to search for any overridden methods by the UIDebuggingInformationOverlay class.
You can use the image lookup LLDB command:
(lldb) image lookup -rn UIDebuggingInformationOverlay
Or alternatively, you can use the methods command you create in Chapter 14 of the book, “Dynamic Frameworks”:
(lldb) methods UIDebuggingInformationOverlay
The following command would be equivalent to that:
(lldb) exp -lobjc -O -- [UIDebuggingInformationOverlay _shortMethodDescription]
Take note of the overridden init instance method found in the output of either command.
You’ll need to explore what this init is doing. You can follow along with LLDB’s disassemble command, but for visual clarity, I’ll use my own custom LLDB disassembler, dd, which outputs in color and is available here: http://ift.tt/2qRRhWC.
Here’s the init method’s assembly in iOS 10. If you want to follow along in black & white in LLDB, type:
(lldb) disassemble -n "-[UIDebuggingInformationOverlay init]"
Again, this is showing the assembly of this method in iOS 10.
Colors (and dd‘s comments marked in green) make reading x64 assembly soooooooooooo much easier. In pseudo-Objective-C code, this translates to the following:
@implementation UIDebuggingInformationOverlay

- (instancetype)init {
  if (self = [super init]) {
    [self _setWindowControlsStatusBarOrientation:NO];
  }
  return self;
}

@end
Nice and simple for iOS 10. Let’s look at the same method for iOS 11:
This roughly translates to the following:
@implementation UIDebuggingInformationOverlay

- (instancetype)init {
  static BOOL overlayEnabled = NO;
  static dispatch_once_t onceToken;
  dispatch_once(&onceToken, ^{
    overlayEnabled = UIDebuggingOverlayIsEnabled();
  });
  if (!overlayEnabled) {
    return nil;
  }

  if (self = [super init]) {
    [self _setWindowControlsStatusBarOrientation:NO];
  }
  return self;
}

@end
Checks are now enforced in iOS 11, thanks to UIDebuggingOverlayIsEnabled(): init returns nil if this code is not running on an internal Apple device.
You can verify these disappointing precautions yourself by typing the following in LLDB on an iOS 11 Simulator:
(lldb) po [UIDebuggingInformationOverlay new]
This is a shorthand way of alloc/init‘ing a UIDebuggingInformationOverlay. You’ll get nil.
With LLDB, disassemble the first 10 lines of assembly for -[UIDebuggingInformationOverlay init]:
(lldb) disassemble -n "-[UIDebuggingInformationOverlay init]" -c10
Your assembly won’t be color coded, but this is a small enough chunk to understand what’s going on.
Your output will look similar to:
UIKit`-[UIDebuggingInformationOverlay init]:
    0x10d80023e <+0>:  push   rbp
    0x10d80023f <+1>:  mov    rbp, rsp
    0x10d800242 <+4>:  push   r14
    0x10d800244 <+6>:  push   rbx
    0x10d800245 <+7>:  sub    rsp, 0x10
    0x10d800249 <+11>: mov    rbx, rdi
    0x10d80024c <+14>: cmp    qword ptr [rip + 0x9fae84], -0x1 ; UIDebuggingOverlayIsEnabled.__overlayIsEnabled + 7
    0x10d800254 <+22>: jne    0x10d8002c0 ; <+130>
    0x10d800256 <+24>: cmp    byte ptr [rip + 0x9fae73], 0x0 ; mainHandler.onceToken + 7
    0x10d80025d <+31>: je     0x10d8002a8 ; <+106>
Pay close attention to offset 14 and 22:
0x10d80024c <+14>: cmp    qword ptr [rip + 0x9fae84], -0x1 ; UIDebuggingOverlayIsEnabled.__overlayIsEnabled + 7
0x10d800254 <+22>: jne    0x10d8002c0 ; <+130>
Thankfully, Apple includes the DWARF debugging information with their frameworks, so we can see what symbols they are using to access certain memory addresses.
Take note of the UIDebuggingOverlayIsEnabled.__overlayIsEnabled + 7 comment in the disassembly. I actually find it rather annoying that LLDB does this and would consider it a bug. Instead of correctly referencing the symbol at that address, LLDB references the previous symbol in its comments and adds + 7. The value at UIDebuggingOverlayIsEnabled.__overlayIsEnabled + 7 is what we want, but the comment is not helpful, because it has the name of the wrong symbol. This is why I often choose to use my dd command over LLDB’s, since I check for this off-by-one error and replace it with my own comment.
But regardless of the incorrect name LLDB chooses in its comments, this address is compared to -1 (aka 0xffffffffffffffff in a 64-bit process), and execution jumps to a specific address if it doesn’t contain -1. Oh… and now that we’re on the subject: dispatch_once_t variables start out as 0 (they live in static storage, which is zero-initialized) and get set to -1 once a dispatch_once block completes (hint, hint).
Yes, this first check in memory is seeing if code should be executed in a dispatch_once block. You want the dispatch_once logic to be skipped, so you’ll set this value in memory to -1.
From the assembly above, you have two options to obtain the memory address of interest:
You can combine the RIP instruction pointer with the offset to get the load address. In my assembly, I can see this address is located at [rip + 0x9fae84]. Remember, the RIP register will resolve to the next row of assembly since the program counter increments, then executes an instruction.
This means that [rip + 0x9fae84] will resolve to [0x10d800254 + 0x9fae84] in my case. This will then resolve to 0x000000010e1fb0d8, the memory address guarding the overlay from being initialized.
You can use LLDB’s image lookup command with the verbose and symbol option to find the load address for UIDebuggingOverlayIsEnabled.__overlayIsEnabled.
(lldb) image lookup -vs UIDebuggingOverlayIsEnabled.__overlayIsEnabled
From the output, look for the range field for the end address. Again, this is due to LLDB not giving you the correct symbol. For my process, I got range = [0x000000010e1fb0d0-0x000000010e1fb0d8). This means the byte of interest for me is located at: 0x000000010e1fb0d8. If I wanted to know the symbol this address is actually referring to, I can type:
(lldb) image lookup -a 0x000000010e1fb0d8
Which will then output:
Address: UIKit[0x00000000015b00d8] (UIKit.__DATA.__bss + 24824)
Summary: UIKit`UIDebuggingOverlayIsEnabled.onceToken
This UIDebuggingOverlayIsEnabled.onceToken is the correct name of the symbol you want to go after.
Bypassing Checks by Changing Memory
We now know the exact bytes where this Boolean check occurs.
Let’s first see what value this has:
(lldb) x/gx 0x000000010e1fb0d8
This will dump out 8 bytes in hex located at 0x000000010e1fb0d8 (your address will be different). If you’ve executed the po [UIDebuggingInformationOverlay new] command earlier, you’ll see -1; if you haven’t, you’ll see 0.
Let’s change this. In LLDB type:
(lldb) mem write 0x000000010e1fb0d8 0xffffffffffffffff -s 8
The -s option specifies the number of bytes to write. If typing out 16 f’s is unappealing to you, there are alternative ways to complete the same task. For example, the following would be equivalent:
(lldb) po *(long *)0x000000010e1fb0d8 = -1
You can, of course, verify your work by just examining the memory again.
(lldb) x/gx 0x000000010e1fb0d8
The output should be 0xffffffffffffffff now.
Your Turn
I just showed you how to knock out the initial check for UIDebuggingOverlayIsEnabled.onceToken to make the dispatch_once block think it has already run, but there’s one more check that will hinder your process.
Re-run the disassemble command you typed earlier:
(lldb) disassemble -n "-[UIDebuggingInformationOverlay init]" -c10
At the very bottom of output are these two lines:
0x10d800256 <+24>: cmp    byte ptr [rip + 0x9fae73], 0x0 ; mainHandler.onceToken + 7
0x10d80025d <+31>: je     0x10d8002a8 ; <+106>
This mainHandler.onceToken is, again, the wrong symbol; you care about the symbol immediately following it in memory. I want you to perform the same actions you did on UIDebuggingOverlayIsEnabled.onceToken, but applied to the memory address just past the mainHandler.onceToken symbol. Once you perform the RIP arithmetic on the mainHandler.onceToken reference, you’ll realize the correct symbol, UIDebuggingOverlayIsEnabled.__overlayIsEnabled, is the one you are after.
You first need to find the location of mainHandler.onceToken in memory. You can either perform the RIP arithmetic from the above assembly or use image lookup -vs mainHandler.onceToken to find the end location. Once you’ve found the memory address, write a -1 value into it.
Verifying Your Work
Now that you’ve successfully written a -1 value to mainHandler.onceToken, it’s time to check your work to see if any changes you’ve made have bypassed the initialization checks.
In LLDB type:
(lldb) po [UIDebuggingInformationOverlay new]
Provided you correctly augmented the memory, you’ll be greeted with some more cheery output:
<UIDebuggingInformationOverlay: 0x7fb622107860; frame = (0 0; 768 1024); hidden = YES; gestureRecognizers = <NSArray: 0x60400005aac0>; layer = <UIWindowLayer: 0x6040000298a0>>
And while you’re at it, make sure the class method overlay returns a valid instance:
(lldb) po [UIDebuggingInformationOverlay overlay]
If you got nil for either of the above LLDB commands, make sure you have augmented the correct addresses in memory. If you’re absolutely sure you have and you still get a nil return value, make sure you’re running an iOS 11.0-11.1 Simulator, as Apple could have added further checks in versions released since this tutorial was written!
If all goes well, and you have a valid instance, let’s put this thing on the screen!
In LLDB, type:
(lldb) po [[UIDebuggingInformationOverlay overlay] toggleVisibility]
Then resume the process:
(lldb) continue
Alright… we got something on the screen, but it’s blank!?
Sidestepping Checks in prepareDebuggingOverlay
The UIDebuggingInformationOverlay is blank because we didn’t call the class method +[UIDebuggingInformationOverlay prepareDebuggingOverlay].
Dumping the assembly for this method, we can see one concerning check immediately:
Offsets 14, 19 and 21 call a function named _UIGetDebuggingOverlayEnabled and test if AL (RAX‘s single-byte cousin) is 0. If it is, execution jumps to the end of the function. In other words, the logic in this function is gated by the return value of _UIGetDebuggingOverlayEnabled.
Since we are still using LLDB to build a POC, let’s set a breakpoint on this function, step out of _UIGetDebuggingOverlayEnabled, then augment the value stored in the AL register before the check in offset 19 occurs.
Create a breakpoint on _UIGetDebuggingOverlayEnabled:
(lldb) b _UIGetDebuggingOverlayEnabled
LLDB will indicate that it’s successfully created a breakpoint on the _UIGetDebuggingOverlayEnabled method.
Now, let’s execute the [UIDebuggingInformationOverlay prepareDebuggingOverlay] method, but have LLDB honor breakpoints. Type the following:
(lldb) exp -i0 -O -- [UIDebuggingInformationOverlay prepareDebuggingOverlay]
This uses the -i option that determines if LLDB should ignore breakpoints. You’re specifying 0 to say that LLDB shouldn’t ignore any breakpoints.
Provided all went well, execution will start in the prepareDebuggingOverlay method and call out to the _UIGetDebuggingOverlayEnabled where execution will stop.
Let’s just tell LLDB to resume execution until it steps out of this _UIGetDebuggingOverlayEnabled function:
(lldb) finish
Control flow will finish up in _UIGetDebuggingOverlayEnabled and we’ll be back in the prepareDebuggingOverlay method, right before the test of the AL register on offset 19:
UIKit`+[UIDebuggingInformationOverlay prepareDebuggingOverlay]:
    0x11191a312 <+0>:  push   rbp
    0x11191a313 <+1>:  mov    rbp, rsp
    0x11191a316 <+4>:  push   r15
    0x11191a318 <+6>:  push   r14
    0x11191a31a <+8>:  push   r13
    0x11191a31c <+10>: push   r12
    0x11191a31e <+12>: push   rbx
    0x11191a31f <+13>: push   rax
    0x11191a320 <+14>: call   0x11191b2bf ; _UIGetDebuggingOverlayEnabled
->  0x11191a325 <+19>: test   al, al
    0x11191a327 <+21>: je     0x11191a430 ; <+286>
    0x11191a32d <+27>: lea    rax, [rip + 0x9fc19c] ; UIApp
Through LLDB, print out the value in the AL register:
(lldb) p/x $al
Unless you work at a specific fruit company inside a fancy new “spaceship” campus, you’ll likely get 0x00.
Change this around to 0xff:
(lldb) po $al = 0xff
Let’s verify this worked by single instruction stepping:
(lldb) si
This will get you onto the following line:
je 0x11191a430 ; <+286>
If AL was 0x0 at the time of the test assembly instruction, this will move you to offset 286. If AL wasn’t 0x0 at the time of the test instruction, you’ll keep on executing without the conditional jmp instruction.
Make sure this succeeded by performing one more instruction step.
(lldb) si
If you’re on offset 286, this has failed and you’ll need to repeat the process. However, if you find the instruction pointer has not conditionally jumped, then this has worked!
There’s nothing more you need to do now, so resume execution in LLDB:
(lldb) continue
So, what did the logic do exactly in +[UIDebuggingInformationOverlay prepareDebuggingOverlay]?
To help ease the visual burden, here is a rough translation of what the +[UIDebuggingInformationOverlay prepareDebuggingOverlay] method is doing:
+ (void)prepareDebuggingOverlay {
  if (_UIGetDebuggingOverlayEnabled()) {
    id handler = [UIDebuggingInformationOverlayInvokeGestureHandler mainHandler];
    UITapGestureRecognizer *tapGesture =
        [[UITapGestureRecognizer alloc] initWithTarget:handler
                                                action:@selector(_handleActivationGesture:)];
    [tapGesture setNumberOfTouchesRequired:2];
    [tapGesture setNumberOfTapsRequired:1];
    [tapGesture setDelegate:handler];

    UIView *statusBarWindow = [UIApp statusBarWindow];
    [statusBarWindow addGestureRecognizer:tapGesture];
  }
}
This is interesting: There is logic to handle a two finger tap on UIApp’s statusBarWindow. Once that happens, a method called _handleActivationGesture: will be executed on a UIDebuggingInformationOverlayInvokeGestureHandler singleton, mainHandler.
That makes you wonder: what is the logic in -[UIDebuggingInformationOverlayInvokeGestureHandler _handleActivationGesture:] for?
A quick assembly dump using dd brings up an interesting area:
The UITapGestureRecognizer instance, passed in via the RDI register, has its state compared to the value 0x3 (see offset 30). If it is 3, control continues; if it’s not, control jumps towards the end of the function.
A quick lookup in the header file for UIGestureRecognizer tells us the state has the following enum values:
typedef NS_ENUM(NSInteger, UIGestureRecognizerState) {
  UIGestureRecognizerStatePossible,
  UIGestureRecognizerStateBegan,
  UIGestureRecognizerStateChanged,
  UIGestureRecognizerStateEnded,
  UIGestureRecognizerStateCancelled,
  UIGestureRecognizerStateFailed,
  UIGestureRecognizerStateRecognized = UIGestureRecognizerStateEnded
};
Counting from 0, we can see control will only execute the bulk of the code if the UITapGestureRecognizer‘s state is equal to UIGestureRecognizerStateEnded.
So what does this mean exactly? Not only did UIKit developers put restrictions on accessing the UIDebuggingInformationOverlay class (which you’ve already modified in memory), they also added a “secret” UITapGestureRecognizer to the status bar window that executes the setup logic only when you complete a two-finger tap on it.
How cool is that?
So, Recapping…
Before we try this thing out, let’s quickly recap what you did just in case you need to restart fresh:
You found the memory address of UIDebuggingOverlayIsEnabled.onceToken:
(lldb) image lookup -vs UIDebuggingOverlayIsEnabled.onceToken
And then set it to -1 via LLDB’s memory write or just casting the address to a long pointer and setting the value to -1 like so:
(lldb) po *(long *)0x000000010e1fb0d8 = -1
You also performed the same action for UIDebuggingOverlayIsEnabled.__overlayIsEnabled.
You then created a breakpoint on _UIGetDebuggingOverlayEnabled(), executed the +[UIDebuggingInformationOverlay prepareDebuggingOverlay] command and changed the return value that _UIGetDebuggingOverlayEnabled() produced so the rest of the method could continue to execute.
This was one of the many ways to bypass Apple’s new iOS 11 checks to prevent you from using these classes.
Trying This Out
Since you’re using the Simulator, this means you need to hold down Option on the keyboard to simulate two touches. Once you get the two touches parallel, hold down the Shift key to drag the tap circles around the screen. Position the tap circles on the status bar of your application, and then click.
You’ll be greeted with the fully functional UIDebuggingInformationOverlay!
Introducing Method Swizzling
Reflecting: how long did that take? In addition, we have to manually set this through LLDB every time UIKit gets loaded into a process. Finding and setting these values in memory can definitely be done through a custom LLDB script, but there’s an elegant alternative using Objective-C’s method swizzling.
But before diving into how, let’s talk about the what.
Method swizzling is the process of dynamically changing what an Objective-C method does at runtime. Compiled code in the __TEXT section of a binary can’t be modified (well, it can with the proper entitlements that Apple will not give you, but we won’t get into that). However, when executing Objective-C code, objc_msgSend comes into play. In case you forgot, objc_msgSend will take an instance (or class), a Selector and a variable number of arguments and jump to the location of the function.
Method swizzling has many uses, but oftentimes people use this tactic to modify a parameter or return value. Alternatively, they can snoop and see when a function is executing code without searching for references in assembly. In fact, Apple even (precariously) uses method swizzling in it’s own codebase like KVO!
Since the internet is full of great references on method swizzling, I won’t start at square one (but if you want to, I’d say http://ift.tt/NZjTWH has the clearest and cleanest discussion of it). Instead, we’ll start with the basic example, then quickly ramp up to something I haven’t seen anyone do with method swizzling: use it to jump into an offset of a method to avoid any unwanted checks!
Finally — Onto A Sample Project
Included in this tutorial is a sample project named Overlay, which you can download here. It’s quite minimal; it only has a UIButton smack in the middle that executes the expected logic to display the UIDebuggingInformationOverlay.
You’ll build an Objective-C NSObject category to perform the Objective-C swizzling on the code of interest as soon as the module loads, using the Objective-C-only load class method.
Build and run the project. Tap on the lovely UIButton. You’ll only get some angry output from stderr saying:
UIDebuggingInformationOverlay 'overlay' method returned nil
As you already know, this is because of the short-circuited overriden init method for UIDebuggingInformationOverlay.
Let’s knock out this easy swizzle first; open NSObject+UIDebuggingInformationOverlayInjector.m. Jump to Section 1, marked by a pragma. In this section, add the following Objective-C class:
//****************************************************/ #pragma mark - Section 1 - FakeWindowClass //****************************************************/ @interface FakeWindowClass : UIWindow @end @implementation FakeWindowClass - (instancetype)initSwizzled { if (self= [super init]) { [self _setWindowControlsStatusBarOrientation:NO]; } return self; } @end
For this part, you declared an Objective-C class named FakeWindowClass, which is a subclass of a UIWindow. Unfortunately, this code will not compile since _setWindowControlsStatusBarOrientation: is a private method.
Jump up to section 0 and forward declare this private method.
//****************************************************/ #pragma mark - Section 0 - Private Declarations //****************************************************/ @interface NSObject() - (void)_setWindowControlsStatusBarOrientation:(BOOL)orientation; @end
This will quiet the compiler and let the code build. The UIDebuggingInformationOverlay‘s init method has checks to return nil. Since the init method was rather simple, you just completely sidestepped this logic and reimplemented it yourself and removed all the “bad stuff”!
Now, replace the code for UIDebuggingInformationOverlay‘s init with FakeWindowClass‘s initSwizzled method. Jump down to section 2 in NSObject‘s load method and replace the load method with the following:
+ (void)load {
  static dispatch_once_t onceToken;
  dispatch_once(&onceToken, ^{
    Class cls = NSClassFromString(@"UIDebuggingInformationOverlay");
    NSAssert(cls, @"DBG Class is nil?");

    // Swizzle code here
    [FakeWindowClass swizzleOriginalSelector:@selector(init)
                         withSizzledSelector:@selector(initSwizzled)
                                    forClass:cls
                               isClassMethod:NO];
  });
}
Build and rerun the Overlay app with this new code. Tap on the UIButton to see what happens now that you've replaced init to produce a valid instance.
UIDebuggingInformationOverlay now pops up without any content. Almost there!
The Final Push
You’re about to build the final snippet of code for the soon-to-be-replacement method of prepareDebuggingOverlay, which begins with a check of whether _UIGetDebuggingOverlayEnabled() returns 0x0 or 0x1. If it returns 0x0, control jumps to the end of the function.
To get around this, you’ll “simulate” a call instruction by pushing a return address onto the stack, but instead of call‘ing, you’ll jmp to an offset past the _UIGetDebuggingOverlayEnabled check. That way, you perform the function prologue in your own stack frame and directly skip the dreaded check at the beginning of prepareDebuggingOverlay.
In NSObject+UIDebuggingInformationOverlayInjector.m, navigate down to Section 3 – prepareDebuggingOverlay, and add the following snippet of code:
+ (void)prepareDebuggingOverlaySwizzled {
  Class cls = NSClassFromString(@"UIDebuggingInformationOverlay");
  SEL sel = @selector(prepareDebuggingOverlaySwizzled);
  Method m = class_getClassMethod(cls, sel);
  IMP imp = method_getImplementation(m); // 1

  void (*methodOffset) = (void *)((imp + (long)27)); // 2
  void *returnAddr = &&RETURNADDRESS; // 3

  // You'll add some assembly here in a sec

RETURNADDRESS: ; // 4
}
Let’s break this crazy witchcraft down:
I want to get the starting address of the original prepareDebuggingOverlay. However, since this code will be swizzled, by the time it executes, prepareDebuggingOverlaySwizzled will actually point to the real prepareDebuggingOverlay starting address.
I take the starting address of the original prepareDebuggingOverlay (given to me through the imp variable) and offset the value in memory past the _UIGetDebuggingOverlayEnabled() check. I used LLDB to figure out the exact offset by dumping the assembly and calculating it by hand (disassemble -n "+[UIDebuggingInformationOverlay prepareDebuggingOverlay]"). This is insanely brittle, as any new code or compiler change in clang will likely break it. I strongly recommend you calculate this offset yourself in case it changes after iOS 11.1.1.
Since you are faking a function call, you need an address to return to after this soon-to-be-executed function offset finishes. This is accomplished by taking the address of a declared label. Labels are a rarely used feature that let you jmp to different areas of a function; their use in modern programming is considered bad practice, since if/for/while constructs can accomplish the same thing… but not for this crazy hack.
This is the declaration of the label RETURNADDRESS. Yes, you do need that semicolon after the label: C syntax requires a label to be immediately followed by a statement.
Time to cap this bad boy off with some sweet inline assembly! Right above the label RETURNADDRESS declaration, add the following inline assembly:
+ (void)prepareDebuggingOverlaySwizzled {
  Class cls = NSClassFromString(@"UIDebuggingInformationOverlay");
  SEL sel = @selector(prepareDebuggingOverlaySwizzled);
  Method m = class_getClassMethod(cls, sel);
  IMP imp = method_getImplementation(m);

  void (*methodOffset) = (void *)((imp + (long)27));
  void *returnAddr = &&RETURNADDRESS;

  __asm__ __volatile__(       // 1
      "pushq  %0\n\t"         // 2
      "pushq  %%rbp\n\t"      // 3
      "movq   %%rsp, %%rbp\n\t"
      "pushq  %%r15\n\t"
      "pushq  %%r14\n\t"
      "pushq  %%r13\n\t"
      "pushq  %%r12\n\t"
      "pushq  %%rbx\n\t"
      "pushq  %%rax\n\t"
      "jmp    *%1\n\t"        // 4
      :
      : "r" (returnAddr), "r" (methodOffset)); // 5

RETURNADDRESS: ;
}
Don’t be scared: you’re about to write x86_64 assembly in AT&T syntax (Apple’s assembler is not a fan of Intel syntax). The __volatile__ hints to the compiler not to optimize this block away.
You can think of this a bit like C’s printf: the %0 will be replaced by the value supplied through returnAddr. In x86, the return address is pushed onto the stack right before entering a function. Since returnAddr points to an executable address following this assembly, this is how you fake an actual function call!
The following assembly is copied from the function prologue of +[UIDebuggingInformationOverlay prepareDebuggingOverlay]. It performs the function’s setup while letting you skip the dreaded check.
Here you jump to offset 27 of prepareDebuggingOverlay, after setting up all the data and stack state needed to not crash. The jmp *%1 resolves to a jump to the value stored in methodOffset. Finally, what are those “r” strings? I won’t get too deep into inline assembly constraints, as your head might explode with information overload (think Scanners), but just know they tell the assembler that your assembly can use any register for reading these values.
Jump back up to section 2 where the swizzling is performed in the +load method and add the following line of code to the end of the method:
[self swizzleOriginalSelector:@selector(prepareDebuggingOverlay)
          withSizzledSelector:@selector(prepareDebuggingOverlaySwizzled)
                     forClass:cls
                isClassMethod:YES];
Build and run. Tap on the UIButton to execute the code required to set up the UIDebuggingInformationOverlay class, then perform the two-finger tap on the status bar.
Omagerd, can you believe that worked?
I am definitely a fan of the hidden two-finger status bar tap, but let’s say you wanted to bring this up solely from code. Here’s what you can do:
Open ViewController.swift. At the top of the file add:
import UIKit.UIGestureRecognizerSubclass
This will let you set the state of a UIGestureRecognizer (default headers allow only read-only access to the state variable).
Once that’s done, augment the code in overlayButtonTapped(_ sender: Any) to be the following:
@IBAction func overlayButtonTapped(_ sender: Any) {
  guard let cls = NSClassFromString("UIDebuggingInformationOverlay") as? UIWindow.Type else {
    print("UIDebuggingInformationOverlay class doesn't exist!")
    return
  }
  cls.perform(NSSelectorFromString("prepareDebuggingOverlay"))

  let tapGesture = UITapGestureRecognizer()
  tapGesture.state = .ended

  let handlerCls = NSClassFromString("UIDebuggingInformationOverlayInvokeGestureHandler") as! NSObject.Type
  let handler = handlerCls
    .perform(NSSelectorFromString("mainHandler"))
    .takeUnretainedValue()
  let _ = handler
    .perform(NSSelectorFromString("_handleActivationGesture:"), with: tapGesture)
}
Final build and run. Tap on the button and see what happens.
Boom.
Where to Go From Here?
You can download the final project from this tutorial here.
Crazy tutorial, eh? In this chapter, you spelunked into memory, changing dispatch_once_t tokens as well as Booleans, to build a POC UIDebuggingInformationOverlay that’s compatible with iOS 11 while getting around Apple’s newly introduced checks designed to prevent you from using this class.
Then you used Objective-C’s method swizzling to perform the same actions as well as hook into only a portion of the original method, bypassing several short-circuit checks.
This is why reverse engineering Objective-C is so much fun: you can hook into methods quietly called in private code you don’t have the source for, and make changes or monitor what they do.
Still have energy after that brutal chapter? This swizzled code will not work on an ARM64 device; you’d need to look at the ARM64 assembly and perform an alternative action for that architecture, likely behind a preprocessor macro.
If you enjoyed what you learned in the tutorial, why not check out the complete Advanced Apple Debugging & Reverse Engineering book, available in our store?
Here’s a taste of what’s in the book:
One thing you can be sure of: after reading this book, you’ll have the tools and knowledge to answer even the most obscure question about your code — or even someone else’s.
Questions? Comments? Come join the forum discussion below!
The post Swizzling in iOS 11 with UIDebuggingInformationOverlay appeared first on Ray Wenderlich.