Text
important to note that the only path to failing a class that i saw & was worried about (does not include paths where i just straight up give up. only paths where i try at least a little bit but am unsuccessful) is now closed. i had contributed 0 to this one group project, and was super lost and didn’t really know HOW to contribute, and said “hey guys i noticed i haven’t really had much chance to contribute, i don’t have a lot of ideas on how to help but i really want to pull my weight so if there’s anything you think i should do pls lmk” (to be clear this is not a “divide up the work & go” kinda project. we are trying to build something we don’t really know how to build. so one guy was like “i could try to build it this way?” and built it and now we’re, vaguely, improving on that) and somebody was like “yeah it’s tough to divide things up, maybe you could make a react app to present it?” so i made a react app. so now i’ve helped and nobody is going to go “professor she hasn’t helped!” and he isn’t going to say “0 for you, woman” and i won’t auto-fail. 🥳
#post tag#girl who is going to be ok#baby’s first from scratch react app everyone say you’re proud#look at her she’s a frontend dev!#(i used a template. i assume i’m allowed to use a template and still say it’s from scratch)#took me 3 ish hours i’m so goated#(… it was simple but also not. i had some issues)#(some issues i’ll never have again! that’s the beauty of baby’s first from scratch [x])#(like for example. i was trying to run state setters in the function for the component. very bad! i need a useEvent hook. now i know)
Text
Art, Feminism, & The Distribution of The Sensible
It has taken me quite a while to actually write anything on this blog, not really due to a lack of ideas or a lack of inspiring and transpiring events; I guess mostly I wanted to take the time to define the dimensions of what I aim for this blog to be: its boundaries and scope of focus. Perhaps it's also because I did not want anything I post here to be of an academic nature, but rather a place where I can export my perspectives without sounding like I'm submitting a paper to a professor. However, in terms of defining the focus of this blog, which might simply be transmitting my perspective to you, the reader, this blog cannot be but an extension of myself. So here, I'm going to write from the first-person perspective. I hope that to you this seems valid enough as reliable information; the self as a vehicle of experience, for me at least, is anything but negligible.
Also, a shoutout to Naeem Inayatullah of Ithaca College for importing the narrative approach into political science.
As a precursor, however, I would like you to take the time to look at a paper entitled "Neoliberalism As Creative Destruction" by David Harvey. This should really help you understand where I am coming from as an individual. But do keep in mind that despite me, in the future or the past, sounding like a left-wing Marxist geographer, I am not. I am in fact neither a supporter of the capitalist way of life, nor of the socio-economic, post-capitalistic arrangement that Marx presented in his Manifesto. I am simply myself; perhaps, in terms of background, a Humanist... but aren't we all?
A point to stress here is the current cultural phenomenon of extremes. For some reason, the polarization of thought and ideology seems to be the status quo of our times and generation. I am someone who would like to belong to the middle path.
Now to get to the point behind this post and why I have asked you to read Harvey's paper. Neoliberal capitalism is best described as the shift from an industrial economy to one that is mostly characterized by the service-sector industries. This therefore calls for a shift in the nature and qualifications of the workforces required to run such economies, and ultimately to populate their cities. To make a long story short, major cities across the globe compete amongst each other to attract as many members of the "creative class" as possible to empower their industries. The creative class is the term used to describe the modern service-sector workforce: mainly graphic designers, app developers, architects, writers, photographers, lawyers, gig-industry execs, and overall modern service providers. This activity automatically changes the layout of a city. Gentrified neighborhoods, galleries, art spaces, installations, film exhibitions, speciality coffee shops, and artsy food concepts come in to fill the urban geography. This has previously been described by Sharon Zukin as "pacification by cappuccino".
The point I am trying to make here, however, is not to describe this shift, nor the socio-economic gaps that are produced through the need for low-wage service-sector cohorts alongside middle- to upper-class entrepreneurs. Basically, baristas and CEOs. I'm writing this to describe a slight crisis that arises when art and culture are owned by, or introduced into, the realm of state-led enterprises. Some have described this as the death of art with regards to the traditional function and definition of the role art has played in the social and cultural sphere.
Art, whether visual or theoretical (be it a book, a sculpture, a painting, a photograph, or a film), has always participated in shaping what is generally described as "The Distribution of The Sensible". Let's just call it the realm of the 'taken for granted': the socially and culturally accepted notions and activities, as well as the modes of thought, of a certain society. Art has always navigated the boundaries of this realm, challenging its norms and its taboos, presenting the internal struggles of the unheard, and representing an outlet for the expression of their experiences. This was not simply limited to the excluded or the impoverished, but was the right of each and every single individual. Art was a valid way to say what could not be said, to contest and to reconsider, to hold up a mirror and allow everyone to take a closer look at their blemishes and open pores. Yet with a state owning art, validating it, empowering it, and, ultimately, utilizing it as a capitalization opportunity, art is automatically stripped of all of its characteristics as well as its power. Art loses its voice and influence simply by gaining recognition and normality from that which once considered it novel and abnormal.
Today we have art biennials curated by cities through their local authorities. They specify spaces for galleries, times and dates for events, and showcase certain artists to be celebrated and presented as the trendsetters and pioneers. This is applicable to all fields of artistic format: fashion, film, tangible and intangible. The problematic that arises from this dialectic is the control of the expression of art itself; what is brought to light and what is left in the shadows is then decided by the hegemony. This could very easily be influenced by agendas, by approval and disapproval bestowed by unknown committees. Yet art is not allowed to speak out against it; resistance becomes illegitimate because art is recognized and now operates within the distribution of the sensible, as opposed to previously operating outside of it. You have been recognized, therefore you are not allowed to complain. We have heard your voice and provided you a platform. That's basically what this procurement entails.
Another issue that comes to hand in this light is the oversaturation of artistic expression on display everywhere, in every nook and cranny, in every alley and street... all commissioned... all artificial... which really numbs the viewer to the impact and meaning of art. I find myself being less interested and less amazed... let alone less curious, when I look at art these days. Have I always been this way? No... not really... I have noticed myself change, however. It was exciting from 2011 up to 2014, when things were really gearing up towards the cultural turn... but it's 2019 now and not much has changed... and in 2019 I find myself incapable and indifferent.
If you wish to understand this better, the work of professor Jenny Edkins on protests and governmental expansion could be of aid. I have personally had the pleasure and honor of being taught by her in a few modules. What Edkins basically states is the following: if a certain faction of the population, feeling oppressed or excluded, decides to demonstrate or protest its struggles to the state, then the state itself, by recognizing the protests and answering the requests, automatically expands its control over the different cohorts of said population. This faction can no longer use its voice to demand a certain right; another representative may do so, yet only once. I have theorized something close to this, which I call The Morbid Loop of Misconduct, that I might discuss in the following posts.
This ownership and capitalization of such fields, utilized as bait for branding cities and determining their respective levels of 'coolness', reached the limits of my tolerance when feminist initiatives got introduced into the mix. We have recently celebrated International Women's Day... congratulations, by the way... yet many businesses have utilized this day as an opportunity to brand themselves as understanding and co-operative, as good listeners... as celebrators of feminism... in order to increase sales and foster public attention. I could pull so many examples from my Instagram feed, but there is one photo that I deem appropriate in explaining what I am trying to describe, a post entitled: Man Visibly Upset After International Women's Day Post Doesn't Get The Likes He Banked On. (https://www.instagram.com/p/BuwFxPEFmBQ/) It was intended as a meme... yet to me it described anything but a laughing matter... perhaps memes might be art's last frontier... who knows... we'll let neoliberalism decide. The ownership of feminism by the socio-economic hegemony (and, currently in the pipeline, LGBT discourse) is going to bring about a new frontier of markets: the market of civil-rights attraction.
On second thought, though, this is nothing new. State ownership of feminist discourse has been used throughout history for reasons neither human-centered nor humane. If, for instance, we take Marx's statement that "the executive of the modern state is but a committee for managing the common affairs of the bourgeoisie" (the truth of which I will deal with in a later post), we come to understand that, since its ownership by state policy, feminism has been used to further service the machine of capitalism and lubricate its gears and cogs for more efficient outcomes. Take, for instance, the insane push for women into the workforce. Now, I am in no way against women having jobs or earning their own living, running businesses, or even running countries; what we need to address, however, are the socio-economic frameworks within which these ideas were introduced and the outcomes that result from this push. The popularization of careerist orientations has many negative impacts on the social fabric itself. This is not simply in the sense of conservatism and maintaining tradition, but of disabling cultures from resisting epistemological occupation. Sadly, women have been utilized by this new mode of radical feminism and made to think that they are the patrons of such movements when they are indeed the victims.
Another example is Edward Bernays' 1929 "Torches of Freedom" campaign. The man was a PR freak who put the natural human instincts for social impact and cultural roles in the service of a monstrosity of an industry. You know the story: it was culturally frowned upon, and considered a taboo, for women to smoke in public spaces. There was this whole discourse about the cigarette being another phallic symbol, a symbol of men's power... but basically, as far as tobacco companies were concerned, half of the population was a potential market they weren't tapping into. Along comes Freud's nephew, who rebrands cigarettes as the torches of freedom, asking women posing as suffragettes to march in a protest with lit cigarettes as a move to display the power of the feminine.
Everything that the state has done towards this "cause" after obtaining ownership of feminism has been of the same nature. But what needs to be kept in mind here is that the state, in this sense, is the collective of corporate interests. After all, what is the purpose of a city? What is the current, modern purpose of having an economy? It is to circulate capital through it as efficiently as possible with the largest market share that a state can muster. Read Saskia Sassen's The Global City for a better perspective.
Anyways... I'm tired of writing this... I hope you get the point.
Neoliberal capitalism never fails to surprise me with its creative capacity for destruction.
This post might seem to be of no use... but it was just me letting off some steam. This is what this is all about... letting off some steam.
#Humanis#humanist#human geography#Sharon Zukin#capitalism#neoliberal capitalism#neoliberalism#david harvey#harvey#feminism#art#ranciere#distribution of the sensible#opinionated spectator#civil rights#sociology#anthropology#urbanism#geography#human
Text
Advent of Code 2020: Reflection on Days 8-14
A really exciting week, with a good variety of challenges and relative difficulties. Something tells me that this year, being one where people are waking up later and staying at home all day, the problems have been specifically adapted to be more engaging and interesting for those of us working from home. Now that we've run the gamut of traditional AoC/competitive-programming challenges, I'm excited to see what the last 10 days have in store!
First things first, I have started posting my solutions to GitHub. I hope you find them useful, or at least not too nauseating to look at.
Day 8: To me, this is the quintessential AoC problem: you have a sequence of code-like instructions, along with some metadata the programmer has to keep track of, and there's some minor snag with the (usually non-deterministic) execution you have to identify. Some people in the subreddit feared this problem, thinking it a harbinger of Intcode 2.0. (Just look at that first line... somebody wasn't happy.)
Effectively, I got my struggles with this kind of problem out of the way several years ago: the first couple days of Intcode were my How I Learned to Stop Worrying and Love The While Loop, so this problem was a breeze. It also helps that I've been living and breathing assembly instructions these past few weeks, owing to a course project. I truly must learn, though, to start these problems after I finish my morning coffee, lest I wonder why my code was never executing the "jump" instruction...
Luckily, from here on out, there will be no more coffee-free mornings for me! Part of my partner's Christmas present this year was a proper coffee setup, so as to liberate them from the clutches of instant coffee. I'm not a coffee snob – or, at least, that's what I tell myself – but I was one more half-undrinkable cup of instant coffee away from madness.
Day 9: Bright-eyed, bushy-tailed, and full of fresh-ground and French-pressed coffee, I tackled today's problem on the sofa, between bites of a toasted homemade bagel.
This is a competitive programmer's problem. Or, at least, it would have been, if the dataset were a few orders of magnitude bigger. As of writing, every problem thus far has let even the most naïve solution, so long as it did not contain some massive performance bottleneck, run in under a second. At first, I complained about this to my roommate, as I felt that the problem setters were being too lenient towards solutions without any significant forethought or insight. But, after some thinking, I've changed my tune. Not everything in competitive programming[1] has to be punitive of imperfections in order to be enjoyable. The challenges so far have been fun and interesting, and getting the right answer is just as satisfying whether you get it on the first try or the fiftieth.
First off, if I really find myself languishing from boring data, I can always make the day more challenging by trying it in an unfamiliar language, or by microprofiling my code and trying to make it as efficient as possible. For example, I'm interested in finding a deterministic, graph-theory-based solution to Day 7, such that I don't just search every kind of bag to see which kind leads to the target (i.e., brute-forcing). Maybe I'll give it a shot on the weekend, once MIPS and MARS are just a distant memory. A distant, horrible memory.
Second, even I – a grizzled, if not decorated, competitive and professional programming veteran – have been learning new concepts and facts about my own languages from these easy days. For example, did you know that set membership requests run in O(1) time in Python? That's crazy fast! And here I was, making dictionaries with values like {'a': True} just to check for visitation.
Part 1 was pretty pish-posh. Sure, in worst-case it ran in O(n^2), but when you have a constant search factor of 25 (and not, say, 10^25), that's really not a big deal.
Part 2 is what made me think that today's problem was made for competitive programmers. Whenever a problem mentions sums of contiguous subsets, my brain goes straight for the prefix sum array. They're dead simple to implement: I don't think I've so much as thought about PSAs in years, and I was able to throw mine together without blinking. I did have to use Google to jog my memory as to how to query for non-head values (i.e., looking at running sums not starting from index 0), but the fact that I knew that they could be queried that way at all probably saved me a lot of dev time. Overall complexity was O(n log n) or thereabouts, and I'm sure that I could have done some strange dynamic programming limbo to determine the answer while I was constructing the PSA, but this is fine. I get the satisfaction of knowing to use a purpose-built data structure (the PSA), and of knowing that my solution probably runs a bit faster than the ultra-naïve O(n^3)-type solutions that novice programmers might have come up with, even if both would dispatch the input quickly.
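In case PSAs are new to you, the idea fits in a dozen lines. (A hedged sketch in JavaScript rather than the Python my solutions are in; `nums` and `target` stand in for the day's input and the invalid number.)

// psa[i] holds the sum of nums[0..i-1], so the sum of nums[i..j-1] is psa[j] - psa[i].
function findContiguousRange(nums, target) {
  const psa = [0];
  for (const n of nums) psa.push(psa[psa.length - 1] + n);

  // With all-positive input the PSA is strictly increasing, so each start
  // index can be resolved with a binary search: O(n log n) overall.
  for (let i = 0; i < psa.length; i++) {
    let lo = i + 1, hi = psa.length - 1;
    while (lo <= hi) {
      const mid = (lo + hi) >> 1;
      const sum = psa[mid] - psa[i]; // running sum not starting from index 0
      if (sum === target) return nums.slice(i, mid);
      if (sum < target) lo = mid + 1;
      else hi = mid - 1;
    }
  }
  return null;
}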
Faffing around on the AoC subreddit between classes, I found a lovely image that I think is going to occupy space in my head for a while. It's certainly easy to get stuck in the mindset of the first diagram, and it's important to centre myself and realize that the second is closer to reality.
Day 10: FML. Path-like problems like this are my bread and butter. Part 1 was easy enough: I found the key insight, that the values had to monotonically increase and thus the list ought to be sorted, pretty quickly, and the only implementation trick was keeping track of the different deltas.
Part 2, on the other hand, finally caught me on my Day 9 hubris: with the naïve DFS, after ten minutes of chewing through all of my early-2014 MacBook's RAM, I still didn't have an answer. I tried being creative with optimizing call times; I considered using an adjacency matrix instead of a dictionary-based lookup; and I even considered switching to a recursion-first language like Haskell to boost performance. Ultimately, I stumbled onto the path of (spoilers) memoization using `@functools.cache`, which frankly should have been my first bet. After some stupid typo problems (like, ahem, commenting out the function decorator), I was slightly embarrassed by just how instantly things ran after that.
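`@functools.cache` is Python's batteries-included version; rolling the same idea by hand takes only a few lines. A sketch in JavaScript, with `countWays` as a hypothetical stand-in for the day's DFS rather than my actual solution:

// Memoization by hand: cache each argument's result so the recursion
// only ever computes a given subproblem once.
function memoize(fn) {
  const cache = new Map();
  return arg => {
    if (!cache.has(arg)) cache.set(arg, fn(arg));
    return cache.get(arg);
  };
}

// Hypothetical stand-in for the day's DFS: exponential without the cache,
// linear with it.
const countWays = memoize(n =>
  n < 0 ? 0 : n === 0 ? 1 : countWays(n - 1) + countWays(n - 2) + countWays(n - 3)
);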
As we enter the double-digits, my faith in the problem-setters has been duly restored: just a measly 108-line input was enough to trigger a Heat Death of the Universe execution time without some intelligent intervention. Well done, team!
Day 11: Good ol' Game of Life-style state transition problem. As per usual, I've sweated this type of problem out before, so for the actual implementation, I decided to go for Good Code as the real challenge. I ended up developing – and then refactoring – a single, pure state-transition function, which took in a current state, a neighbour-counting function, and a tolerance for the one element that changes between Parts 1 and 2 (you'll see for yourself), then outputting a tuple of the grid, and whether or not it had changed in the transition. As a result, my method code for Parts 1 and 2 ended up being identical, save for replacing some of the inputs to that state function.
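Concretely, the shape of it is something like this sketch (JavaScript rather than my Python, and the grid-as-array-of-strings representation is an assumption):

// One pure transition function shared by both parts: 'L' empty seat,
// '#' occupied, '.' floor. countNeighbours and tolerance are the only
// things that differ between Parts 1 and 2.
function step(grid, countNeighbours, tolerance) {
  let changed = false;
  const next = grid.map((row, y) =>
    [...row].map((cell, x) => {
      if (cell === '.') return '.';
      const occupied = countNeighbours(grid, x, y);
      if (cell === 'L' && occupied === 0) { changed = true; return '#'; }
      if (cell === '#' && occupied >= tolerance) { changed = true; return 'L'; }
      return cell;
    }).join('')
  );
  // Tuple of the new grid, and whether anything changed this transition.
  return [next, changed];
}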
Despite my roommate's protestations, I'm quite proud of my neighbour-counting functions. Sure, one of them uses a next(filter()) shorthand[2] – and both make heavy (ab)use of Python's new walrus operator, but they do a pretty good job making it obvious exactly what conditions they're looking for, while also taking full advantage of logical short-circuiting for conciseness.
Part 2 spoilers: My Part 2 neighbour counter was largely inspired by my summertime fascination with constraint-satisfaction problems such as the [N-Queens problem](https://stackoverflow.com/questions/29795516/solving-n-queens-using-python-constraint-resolver). Since I realized that "looking for a seat" in the 8 semi-orthogonal directions was effectively equivalent to a queen's move, I knew that what I was really looking for was a delta value – how far in some [Manhattan-distance](https://www.wikiwand.com/en/Taxicab_geometry) direction I had to travel to find a non-aisle cell. If such a number didn't exist, I knew not to bother looking in that direction.
My simulations, whether due to poor algorithmic design or just on account of it being Python, ran a tad slowly. On the full input, Part 1 runs in about 4 seconds, and Part 2 takes a whopping 17 seconds to run fully. I'll be sure to check the subreddit in the coming hours for the beautiful, linear-algebraic or something-or-other solution that runs in constant time. A programmer I have been for many years; a computer scientist I have yet to become.
Day 12: Not terribly much to say on this one. Only that, if you're going to solve problems, it may be beneficial to read the instructions, lest (spoilers) you cause your ship to turn clockwise by 90º... 90 times.
The second part was a fresh take on a relatively tired instruction-sequence problem. The worst part was the feeling of dread I felt while solving, knowing that my roommate – who consistently solves the problems at midnight, whereas I solve them in the morning – was going to awaken another Eldritch beast of NumPy and linear algebra for at least Part 2. Eugh.
Day 13: This was not my problem. I'm going to wrap my entire discussion of the day in spoilers, since I heavily recommend you try to at least stare at this problem for a while before looking at solutions.
spoilers: The first part was... fine. The only real trick was figuring out how to represent the concept of "the bus arrives at a certain time" (i.e., modulo), and just compare that to some offset relative to your input departure time. Simulation works perfectly fine as a lazy solution, since your smallest input value is likely to be something like 13 (and thus your simulation time is bounded).

The second part? Not so much. I knew that I was cutting corners on the first solution, since this problem was just *screaming* to look more mathy than code-y. And, it turns out I was right: the problem could be solved on pen and paper if you were so inclined. If you look around on the subreddit and other comparable programmer spaces, you'll see everyone and their mother crying for the [Chinese Remainder Theorem](https://www.dave4math.com/mathematics/chinese-remainder-theorem/) and, since I have to establish boundaries around my time and energy lest I nerd-snipe myself into academic probation, I had to "give up" relatively quickly and learn how to use the algorithm.

My roommate was able to come up with a solution on his lonesome, which actually relies on a fact I was also able to come up with before giving in. If you use a simple for-loop search to find numbers which satisfy any **two** of the modulo requirements, you'll quickly realize that the gap between any two successive numbers is always equal to the product of those two numbers. (Well, technically, their LCM, but the bus routes are prime for a reason.) So, you can pretty quickly conclude that by the end of it, you'll be searching over the naturals with a step of ∏(buses), and the only trick left is to figure out what starting point you need.

I think my roommate was at a bit of an advantage, though, owing to his confidence. He's definitely a lot better at math than I am, so he could dive into hunches headlong with a confidence that I lack. I found myself unable to follow hunches due to worry that I was either a) completely missing the point, or b) going to accidentally make some critical arithmetic mistake early on that throws off all of my findings. In hindsight, I absolutely *should* have figured out that final Giant Step (hue), and then worked it backwards from the given answer to see what starting points made reasonable sense. But, again, I balked a bit at the sheer enormity of how much I didn't know about this kind of algebra, so I ended up needing a little more Google than brainpower.

I'm chalking this problem up as a learning experience, as I truly had never heard of the CRT. I'm sure "linear systems of residue classes" will pop up again in a similar problem, and it's certainly a hell of a lot faster to compute than using sieves or similar algorithms. Also, I learned that Python 3.8 programmers had a distinct advantage over lesser-versioned Pythonistas, owing to the new functionality that was recently added to the `pow` builtin. In short, `pow` can now solve modular inverses, which is a massive timesave over implementing it yourself. I didn't know about this builtin at all, so I've continued to accomplish my goal of better understanding the standard library.
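To make that growing-stride search concrete, here's a hedged sketch (JavaScript with BigInt, since the running product overflows plain numbers; `buses` is assumed to be a list of [offset, id] pairs):

// Sieve-style CRT: satisfy one congruence at a time, and once a bus fits,
// only multiples of its id preserve that fit (the ids are prime).
function earliestTimestamp(buses) {
  let t = 0n;
  let step = 1n;
  for (const [offset, id] of buses) {
    // Walk forward in strides of the combined period so far.
    while ((t + offset) % id !== 0n) t += step;
    step *= id; // the Giant Step grows toward ∏(buses)
  }
  return t;
}

// The puzzle's sample: buses 7,13,x,x,59,31,19.
// earliestTimestamp([[0n,7n],[1n,13n],[4n,59n],[6n,31n],[7n,19n]]) === 1068781n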
Day 14: The last day of this week! I really enjoyed today's challenge: it was tough, yet accessible from multiple approaches if you weren't a well-learned expert on bitwise masking.
Part 1 was just getting you acquainted with the world of bitmasking and the general workflow of the problem: number in, pass through mask, number out, store in memory. As usual, the formatted text made my Regex Lobe go off, and for once I gave in: it actually made extracting those integers a little easier, as I realized the addresses were of very variable length.
Part 2 was a perfect level of challenge for a Monday morning, methinks. It served me a proper punishment for not reading the updated challenge text appropriately, and I had to think about some clever modifications to my code from Part 1 to make Part 2 work effectively. My final solution wasn't all too efficient, but both parts run in a little under two seconds.
Part 2 spoilers: I'm quite proud of my usage of `'0'` to denote a "soft" zero (i.e., the mask does nothing to this bit) and `'Z'` to denote a "hard" zero (i.e., the mask sets this bit to zero). I suppose I could have also inverted the entire mask – setting all `0`s to `X`s and all `X`s to `0`s – to make the old parse function work normally, but this worked just as well and didn't require completely rejigging the masks to make them work a particular way.
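In sketch form (JavaScript here, with the value and mask both assumed to be 36-character binary strings):

// One mask applier for both parts: '1' forces a one, 'Z' forces a zero,
// '0' leaves the bit alone, and 'X' marks a floating bit (Part 2).
function applyMask(bits, mask) {
  return [...mask].map((m, i) => {
    if (m === '1') return '1';
    if (m === 'Z') return '0';       // "hard" zero
    if (m === '0') return bits[i];   // "soft" zero: no change
    return 'X';                      // floating bit, expanded into both values later
  }).join('');
}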
[1]: I keep having to stop myself from using the acronym with which I'm familiar, lest I get in trouble with Tumblr's new puritan filters. I wonder if the similar acronym for dynamic programming would be of issue.
[2]: If you're unfamiliar, this is a common competitive-programming idiom in Python for "the first element that satisfies..." JavaScript, unfortunately, takes the cake here, as it has a native Array#find method that works much better.
Photo
Introduction to Popmotion: Tween
Popmotion is a functional JavaScript animation library. Compared to other libraries like GreenSock or Anime.js, Popmotion is low-level and unopinionated.
It packs a ton of features, like spring physics and pointer tracking, into a very small filesize (11.5kb).
It allows developers to write their own features using simple functions, rather than waiting for the library author to add them.
It also means it's just as easy to animate 3D objects, charts or React components as it is to animate DOM or SVG elements.
This flexibility can make the initial learning curve steeper than for other libraries. So, in this tutorial series, we'll learn the basics of Popmotion's powerful animations. We'll start with the workhorse of the animation world, the tween.
Install
Popmotion supports a variety of installation methods. In production, I recommend installing via npm, as this allows you to import only the bits you need, saving even more space.
However, for this tutorial, you can follow along with this CodePen, which has been set up with the latest version of Popmotion.
Tween
For those unfamiliar, a tween transitions between one number and another over a predetermined length of time. If you've used a CSS transition, Popmotion's tween function works exactly the same.
We can import tween like so:
const { tween } = popmotion;
By default, tween animates between 0 and 1 over a duration of 300 milliseconds. If you open your console, you can test this yourself:
tween().start({ update: v => console.log(v), complete: () => console.log('complete!') });
But we don't want to animate the console—we want to animate the ball. For this, Popmotion includes another function, styler.
Note: In this first example, we defined both the update and complete functions. But if you provide start with only a single function, it will automatically assign it to update.
Styler
styler is used to create get/set interfaces for HTML and SVG styles optimised for use with animations (from any library!).
In the above example, tween is outputting a number, so we could of course set the ball's opacity like this (try it):
const ball = document.querySelector('.ball'); tween().start(v => ball.style.opacity = v);
However, styler has the following benefits:
Batches renders to prevent layout thrashing.
Renders, at most, once per frame.
Allows transform props to be set individually, allowing the independent animation of props like scale and translateX.
Unifies the CSS and SVG transform coordinate models.
Understands default value types, so you can set translateX (for instance) without appending 'px'.
You're also not limited to using it inside an animation. You could manually set an element's style while others are animating, and the change will be automatically scheduled and batched along with the others.
So let's import it:
const { tween, styler } = popmotion;
Create the ball styler:
const ballStyler = styler(ball);
Now we can use ballStyler to set and animate any of the ball's properties. ballStyler.set is flexible. It can set a single property:
ballStyler.set('background', '#f00');
Or multiple properties:
ballStyler.set({ x: 100, y: 100 });
We want to animate opacity for now, so let's change our animation:
tween().start(v => ballStyler.set('opacity', v));
set can also be curried. By providing it just a property name, it will return a setter function for that prop. So we can neaten the above by writing:
tween().start(ballStyler.set('opacity'));
So far, we've only animated the ball using the default tween properties. Let's take a look at how versatile a tween can be.
Tween Props
tween accepts one optional argument, an object of tween properties. Let's take a look at some of the more commonly used props:
from/to
A tween can be between any two states. We define these with from and to.
Let's animate translateX by rewriting 'opacity' to 'x'. Then, pass from and to props:
tween({ from: 0, to: 300 }).start(ballStyler.set('x'));
Your ball now moves from left to right by 300px.
However, I said that a tween can be between two states, not just numbers. If we provide from and to objects of numbers and/or colors, we can animate multiple properties at once.
Try this:
tween({ from: { x: 0, background: '#198FE3' }, to: { x: 300, background: '#FF1C68' } }).start(ballStyler.set);
This is an easy way to animate multiple props simultaneously.
Duration
duration is defined in milliseconds. By default, a tween will take 300ms, but if we set duration to 1000, it'll take a second:
tween({ duration: 1000, from: 0, to: 300 }).start(ballStyler.set('x'));
Easing
Easing functions are used in tweening to change the rate of movement throughout the animation.
In real life, objects don't start or stop at their target velocity. Depending on the object, they gradually speed up, or gradually slow down, or both.
An easing function works by taking the tween's progress, defined as a number between 0 and 1, and returning a new progress value.
You don't need to know how to make these functions because Popmotion provides a bunch for you.
Import them:
const { easing, tween, styler } = popmotion;
By default, ease is set to easing.easeOut. When a function eases out, it means it starts fast and ends slow.
This was chosen as default because it's my belief that most animation in user interfaces should initiate as a result of a user's action. By starting fast and ending slow, the user will feel as if they imparted their energy, via their tap or click, directly into the interface. It feels snappy, alive, and responsive.
For many animations away from the user's input, or on their own, it can feel a little less jarring to use an animation that eases in, like easing.easeInOut or easing.anticipate, which does a playful tug before animating.
Finally, there's the easing.cubicBezier function, which creates a new easing function based on an easing curve, just like CSS transitions. This provides a massive degree of control and flexibility over your motion.
Try applying some of these to your animation while playing around with duration to see how it affects the feel and character of it.
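For example, assuming the same ball setup from earlier, swapping the ease is one extra property (the `ease` prop name follows from the default mentioned above; this is a sketch to play with, not a prescription):

tween({
  from: 0,
  to: 300,
  duration: 1000,
  ease: easing.anticipate // or easing.easeInOut, or an easing.cubicBezier(...) of your own
}).start(ballStyler.set('x'));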
Repeating
Animations can be repeated in three different ways: loop, yoyo, and flip.
Loop starts the animation from the start. Yoyo mirrors the tween by running it backwards. And flip runs it backwards and flips the easing function.
One of these can be set per tween, and each is set as a number that denotes the number of times to repeat the animation. To repeat forever, simply pass Infinity:
tween({ yoyo: Infinity, from: 0, to: 300 }).start(ballStyler.set('x'));
Playback
When a tween is started, it returns playback controls that we can use to control that animation.
const controls = tween().start(console.log);
In the above example, controls will have access to all of these playback methods, like stop, pause, and resume:
const controls = tween({
  duration: 1000,
  from: 0,
  to: 300
}).start(ballStyler.set('x'));

setTimeout(() => controls.stop(), 500);
We can use these playback controls to pause and then seek through the tween:
const controls = tween({
  duration: 1000,
  from: 0,
  to: 300
}).start(ballStyler.set('x'));

controls.pause();
controls.seek(0.5);
With this, we can create a scrubbable animation! In a later tutorial, we'll explore how to use Popmotion's pointer function to create a scrub bar, but for now you can scrub one tween with a second tween, to see this in action:
const controls = tween({ from: 0, to: 300 }).start(ballStyler.set('x'));
controls.pause();

tween({ duration: 1000 }).start(controls.seek);
Keyframes
For simple a-to-b transitions, tween is excellent. For more complicated sequences of tweens, Popmotion provides another function called keyframes.
Let's import it now:
const { keyframes, easing, tween, styler } = popmotion;
keyframes tweens through a linear series of states. We provide these states to its values property:
keyframes({ values: [0, -150, 150, 0], duration: 2000 }).start(ballStyler.set('x'));
Like tween, we can also define these states as objects. So to move the ball around in a square, we can write:
keyframes({
  values: [
    { x: 0, y: 0 },
    { x: -150, y: -150 },
    { x: -150, y: 150 },
    { x: 150, y: 150 },
    { x: 150, y: -150 },
    { x: 0, y: 0 }
  ],
  duration: 2000
}).start(ballStyler.set);
By default, keyframes will allocate each of these tweens an equal share of the overall duration.
By providing a times array, we can mark each of these states with a number between 0 and 1. 0 represents the start of the animation, and 1 represents the end:
keyframes({ values: [0, -150, 150, 0], times: [0, 0.1, 0.9, 1], duration: 2000 }).start(ballStyler.set('x'));
This way, we can adjust the length of the animation without having to remark each individual segment.
It also allows each animation to be given an individual easing with the easings property:
keyframes({ values: [0, -150, 150, 0], times: [0, 0.1, 0.9, 1], easings: [easing.easeIn, easing.linear, easing.easeOut], duration: 2000 }).start(ballStyler.set('x'));
Because keyframes is just a tween, we can adjust its overall playback with all the same properties like ease and loop, and control it with all the same methods that we learned earlier.
Conclusion
The tween and keyframes functions allow you to create both simple and complex animations.
styler brings its own benefits, like usage outside of animations, standardisation of CSS and SVG transform models, and render batching for high animation performance.
In this tutorial, we've covered just a couple of the animations that Popmotion offers. In the next installment, we're going to explore pointer tracking and velocity-based animations like physics and spring.
Velocity-based animations can be used to create natural-feeling UIs that react realistically to a user's input. See you there!
by Matt Perry via Envato Tuts+ Code https://ift.tt/2s0NULe
Text
Introducing React Easy State
React Easy State is the fruition of my two-year-long journey with ES6 Proxies and meta programming. It is also a state management library for React.
The joy of simplicity
Easy State is a React state management library with no core philosophy - like functional programming or immutability - except for one thing. It aims to be as close to vanilla JavaScript as possible ... and it got pretty close. You can store your state in simple objects, which may be used and mutated in any way you want.
Behind the scenes the state stores are wrapped by ES6 Proxies, which intercept all basic operations - like property get and set - and add a touch of reactive magic. They register which part of which store is used in which component's render and automatically re-render the component when necessary.
The cool thing about Proxies is transparency. From your point of view none of this is visible. You just have to deal with plain objects and React components and let Easy State keep them in sync for you.
Coding a stopwatch
"Hello World!" is boring. Let's make a dumb stopwatch instead.
First we have to create a clock, which serves as our state store. It should save how many times it ticked so far and it should be startable and stoppable.
import { store } from 'react-easy-state'

const clock = {
  ticks: 0,
  start () {
    this.intervalId = setInterval(() => this.ticks++, 10)
  },
  stop () {
    this.intervalId = clearInterval(this.intervalId)
  }
}

export default store(clock)
As I promised you, this is vanilla JavaScript ... except for store. store is one of the two functions of Easy State and it wraps objects with transparent, reactive Proxies.
Rule #1: Always wrap state stores with store before you export them.
We will also need a view to display our clock. Let's go with the simplest option: a function component.
import React from 'react'
import { view } from 'react-easy-state'
import clock from './clock'

function StopWatch () {
  const { ticks, start, stop } = clock

  return (
    <div>
      <div>{ticks}</div>
      <button onClick={start}>Start</button>
      <button onClick={stop}>Stop</button>
    </div>
  )
}

export default view(StopWatch)
Not much to explain. The clock store is a normal object, StopWatch is a normal React component. The only strange thing is view, which is the other function of Easy State. view turns your component reactive and re-renders it when a store property - used by its render - mutates.
Rule #2: Always wrap components with view before you export them.
You can try the live demo here.
I told you, it's dumb... Time to make it shiny.
Become a master watchmaker
We have to add a bunch of features to the clock store. It should display the elapsed ticks in a nicer format, it should know when it is ticking and it should be resettable.
import { store } from 'react-easy-state'
import moment from 'moment'

const clock = {
  ticks: 0,
  start () {
    this.intervalId = setInterval(() => this.ticks++, 10)
  },
  stop () {
    this.intervalId = clearInterval(this.intervalId)
  },
  get time () {
    const time = moment(0).millisecond(this.ticks * 10)
    return {
      seconds: time.format('mm:ss'),
      fraction: time.format('SS')
    }
  },
  get isTicking () {
    return this.intervalId !== undefined
  },
  toggle () {
    this.isTicking ? this.stop() : this.start()
  },
  reset () {
    this.ticks = 0
    this.stop()
  }
}

export default store(clock)
Nothing very surprising, just a bunch of new JS code. Keeping the store framework independent is a nice idea. It lowers the barrier of entry for new devs and makes switching frameworks easier.
Let's move on with the new StopWatch component. It displays the nicely formatted time from the clock and adds a reset functionality.
import React from 'react'
import { view } from 'react-easy-state'
import clock from './clock'

function StopWatch () {
  const { time, toggle, reset, isTicking } = clock
  const label = isTicking ? 'Stop' : 'Start'

  return (
    <div>
      <div>{time.seconds}<small>{time.fraction}</small></div>
      <button onClick={toggle}>{label}</button>
      <button onClick={reset}>Reset</button>
    </div>
  )
}

export default view(StopWatch)
The live demo is here. Still not rocket science, but it can compete with the big guys. Google "stopwatch", if you don't believe me.
The two rules of Easy State
Hopefully you have started to see the pattern from the above examples. Easy State has two simple rules:
Always wrap your state stores with store before you export them.
Always wrap your components with view before you export them.
Apart from these two, you have total freedom. You can access and manage your state however you want to.
Under the hood
Easy State is a stalker; it secretly keeps track of two things:
store tracks every property get and set operation on the state stores,
view tracks the currently running render function.
When a store property is used inside a render function, it is paired with that render and saved as a tuple. Later, when the same property is mutated, Easy State looks up all of the renders saved for it and executes them. This way the view is always kept in sync with the state.
Let's move inside our stopwatch and see what's going on there.
StopWatch is rendered for the first time. view saves the fact that StopWatch is currently rendering.
StopWatch uses the time, isTicking, ticks and intervalId properties of the clock store during its render. All of these get operations are intercepted by the store Proxy and Easy State takes mental notes: StopWatch is using these properties to render.
The user presses the Start button, which starts an interval and sets intervalId. The set operation is intercepted by the store Proxy, which realizes that intervalId (and isTicking) changed. Easy State re-renders every component, which relies on these properties. In our case, this means StopWatch.
The interval increments ticks every 10 milliseconds. Easy State knows that StopWatch uses ticks and re-renders the component every time ticks is incremented.
Why should I try it?
Easy State is based on an old idea - called transparent reactive programming - which is used by VueJS and MobX for example. The innovation lies in the implementation, not the concept.
Both MobX and VueJS use ES5 getters and setters to track property access and mutation on the stores. This approach has limitations - like arrays and expando properties - which require workarounds from your side.
By using ES6 Proxies, Easy State can finally complete the magic of transparent reactivity. It can track anything from dynamic properties to delete operations, inherited properties, iteration, enumeration and property accessors. It won't ruin the reactive fun with exotic bugs and workarounds.
If this article captured your interest please help by sharing it. Also check out the Easy State repo and leave a star before you go.
Thank you!
Introducing React Easy State published first on https://medium.com/@koresol
Text
Code refactoring techniques
Code refactoring is one of the key terms in software development and today I would like to talk about code refactoring techniques that might increase your efficiency!
But first, let's agree on what code refactoring is! Basically, code refactoring is the process of changing a program's source code without modifying its external functional behavior, in order to improve some of the nonfunctional attributes of the software. In other words, code refactoring is the process of clarifying and simplifying the design of existing code, without changing its behavior. Nowadays, agile software development is a must, and agile teams maintain and extend their code a lot from iteration to iteration; without continuous refactoring, this is hard to do. This is because un-refactored code tends to rot: unhealthy dependencies between classes or packages, bad allocation of class responsibilities, way too many responsibilities per method or class, duplicated code, and many other varieties of confusion and clutter. The advantages include improved code readability and reduced complexity; these can improve source-code maintainability and create a more expressive internal architecture.
Two of the most influential people in software development of recent times, Martin Fowler and Kent Beck, wrote the book on the subject, called "Refactoring: Improving the Design of Existing Code". I highly recommend reading it; it is definitely worth it! The book describes the process of refactoring and spends most of its time explaining how to do the various refactorings – the behavior-preserving transformations. In it you will find a simple example that describes the whole process. There are also broader issues around refactoring – the "code smells" that suggest refactoring, and the role of testing – which you will find in the book as well. What I like most about this book is that seventy refactorings and code refactoring techniques are described in detail: the motivation for doing them, the mechanics of how to do them safely, and a simple example. A common question is whether the book is still relevant, and it is a good one, since technology has made many advances; but in our experience at Apiumhub, and from listening to other software developers, the book is still very useful. The code refactoring techniques described in it have not changed, since they are part of the fundamental use of programming languages. Another reason to read it is that it is written by legends of our time, by the people who actually tried it first and developed the concept! There are other interesting books on this topic, and you can find them here, but this one is a high-priority one.
Some tips for doing code refactoring techniques right
Code refactoring should be done as a series of small changes, each of which makes the existing code slightly better while still leaving the program in working order. Don’t mix a whole bunch of refactorings into one big change.
When you do refactoring, you should definitely do it using TDD and CI. Without being able to run those tests after each little step in a refactoring, you create a risk of introducing bugs.
The code should become cleaner.
New functionality should not be created during refactoring. Do not mix refactoring and direct development of new features. Try to separate these processes at least within the confines of individual commits.
Benefits of code refactoring
1. See the whole picture
If you have one main method that handles all of the functionality, it's most likely way too long and incredibly complex. But if it's broken down into parts, it's easy to see what is really being done.

2. Make it readable for your team
Make it easy to understand for your peers; don't write it for yourself, think of the long term.

3. Maintainability
Integration of updates and upgrades is a continuous process that is unavoidable and should be welcomed. When the codebase is unorganized and built on a weak foundation, developers are often hesitant to make changes. But with code refactoring and organized code, the product will be built on a clean foundation and will be ready for future updates.

4. Efficiency
Code refactoring may be considered an investment, but it gets good results. You reduce the effort required for future changes to the code, either by you or other developers, thus improving efficiency.

5. Reduce complexity
Make it easier for you and your team to work on the project.
List of main code refactoring techniques
There are many code refactoring techniques and I do not want to cover them all, as this post would end up becoming a book in itself. So, I decided to pick the ones we feel are the most common and useful.
Red-green refactoring
Let's start by briefly talking about the very popular red-green code refactoring technique. Red-Green-Refactor is the Agile engineering pattern which underpins Test-Driven Development, characterized by a "test-first" approach to design and implementation. This lays the foundation for all forms of refactoring. You incorporate refactoring into the test-driven development cycle by starting with a failing "red" test, writing the simplest code possible to get the test to pass "green", and finally working on improving and enhancing your code while keeping the test "green". This approach is about seamlessly integrating refactoring into your overall development process and working towards keeping code clean. There are two distinct parts to this: writing code that adds a new function to your system, and improving the code that performs this function. The important thing is to remember not to do both at the same time during the workflow.
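To make the cycle tangible, here's a toy pass over a hypothetical totalPrice function (illustrative names, no particular test framework assumed):

// 1. RED: the test below exists before the implementation, so it fails.
// 2. GREEN: the simplest loop-and-sum version makes it pass.
// 3. REFACTOR: with the test still green, tidy the implementation into this:
function totalPrice(items) {
  return items.reduce((sum, item) => sum + item.price, 0);
}

// The test never changes across the three steps.
console.assert(totalPrice([{ price: 2 }, { price: 3 }]) === 5, 'totalPrice sums item prices');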
Preparatory refactoring
As a developer, there are things you can do to your codebase to make building your next feature a little less painful. Martin Fowler calls this preparatory refactoring. This, again, can be executed using the red-green technique described above. Preparatory refactoring can also involve paying down technical debt that was accumulated during the earlier phases of feature development. Even though the end-users may not see eye to eye with the engineering team on such efforts, developers almost always appreciate the value of a good refactoring exercise.
Branching by abstraction refactoring
Abstraction has its own group of refactoring techniques, primarily associated with moving functionality along the class inheritance hierarchy, creating new classes and interfaces, and replacing inheritance with delegation and vice versa. For example: pull up field, pull up method, pull up constructor body, push down field, push down method, extract subclass, extract superclass, extract interface, collapse hierarchy, form template method, replace inheritance with delegation, replace delegation with inheritance, etc. There are two types of refactoring effort, classified based on scope and complexity. Branching by abstraction is a technique that some teams use to take on large-scale refactoring: the basic idea is to build an abstraction layer that wraps both the part of the system that is to be refactored and the counterpart that is eventually going to replace it. For example: encapsulate field – force code to access the field with getter and setter methods; generalize type – create more general types to allow for more code sharing; replace type-checking code with state; replace conditional with polymorphism, etc.
Composing methods refactoring
Much of refactoring is devoted to correctly composing methods. In most cases, excessively long methods are the root of all evil. The vagaries of code inside these methods conceal the execution logic and make the method extremely hard to understand, and even harder to change. The code refactoring techniques in this group streamline methods and remove code duplication. Examples: extract method, inline method, extract variable, inline temp, replace temp with query, split temporary variable, remove assignments to parameters, etc.
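As a quick illustration, here's extract method sketched on a hypothetical invoice printer, in the spirit of Fowler's classic example (all names are illustrative):

// Before, the banner- and detail-printing code sat inline in one long method.
// After extraction, the intent reads at a glance.
function printOwing(invoice) {
  printBanner();
  printDetails(invoice); // extracted method
}

function printBanner() {
  console.log('*** Customer Owes ***');
}

function printDetails(invoice) {
  console.log(`name: ${invoice.customer}`);
  console.log(`amount: ${invoice.amount}`);
}

printOwing({ customer: 'Ada', amount: 200 });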
Moving features between objects refactoring
These code refactoring techniques show how to safely move functionality between classes, create new classes, and hide implementation details from public access. For example: move method, move field, extract class, inline class, hide delegate, remove middle man, introduce foreign method, introduce local extension, etc.
Simplifying conditional expressions refactoring
Conditionals tend to get more and more complicated in their logic over time, and there are yet more techniques to combat this as well. For example: consolidate conditional expression, consolidate duplicate conditional fragments, decompose conditional, replace conditional with polymorphism, remove control flag, replace nested conditional with guard clauses, etc.
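For instance, replace nested conditional with guard clauses, sketched on a hypothetical payout rule (every name here is illustrative):

// Each special case returns early instead of deepening the nesting.
function payAmount(employee) {
  if (employee.isSeparated) return { amount: 0, reason: 'SEP' };
  if (employee.isRetired) return { amount: 0, reason: 'RET' };
  // The normal path is no longer buried several levels deep.
  return { amount: employee.baseSalary / 12, reason: 'NORMAL' };
}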
Simplifying method calls refactoring
These techniques make method calls simpler and easier to understand. This simplifies the interfaces for interaction between classes. For example: add parameter, remove parameter, rename method, separate query from modifier, parameterize method, introduce parameter object, preserve whole object, remove setting method, replace parameter with explicit methods, replace parameter with method call, etc.
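A small sketch of introduce parameter object, with hypothetical names: a pair of related date arguments becomes a single object.

// Before: amountInvoicedIn(invoices, startDate, endDate)
// After: the date range travels as one value.
function amountInvoicedIn(invoices, { start, end }) {
  return invoices
    .filter(inv => inv.date >= start && inv.date <= end)
    .reduce((sum, inv) => sum + inv.amount, 0);
}

const invoices = [
  { date: new Date('2018-02-10'), amount: 120 },
  { date: new Date('2018-05-03'), amount: 80 }
];

const total = amountInvoicedIn(invoices, {
  start: new Date('2018-01-01'),
  end: new Date('2018-03-31')
}); // 120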
Breaking code apart into more logical pieces refactoring
Componentization breaks code down into reusable semantic units that present clear, well-defined, simple-to-use interfaces. For example: extract class moves part of the code from an existing class into a new class, while extract method turns part of a larger method into a new method. By breaking code down into smaller pieces, it becomes more easily understandable. This is also applicable to functions.
User interface refactoring
These are simple changes to the UI that retain its semantics, for example: align entry field, apply common button size, apply font, indicate format, reword in active voice, increase color contrast, etc.
Those were some examples of code refactoring techniques; some of them may only be applied to certain languages or language types. A longer list can be found in Martin Fowler's refactoring book, which we discussed above.
And if you are interested in best practices in software development, I highly recommend subscribing to our monthly newsletter to receive the latest software development books, tips, and upcoming events.
If you liked this article about code refactoring techniques, you might like …
Software development books to read in 2018
Why Kotlin language? Why did Google choose it?
iOS continuous integration with Fastlane & Jenkins
MVP pattern in iOS
Software architecture books to read this year
Scala Type bounds
F-bound over generic type of Scala
Charles Proxy in Android Emulator
Top software development blogs
Top software testing techniques & tools to use
A Guide to Tmux that will increase your productivity
Functional debt vs technical debt
BDD: UI Testing
Microservices vs monolithic architecture
SRP in Object oriented design
Almost infinite scalability
The post Code refactoring techniques appeared first on Apiumhub.
Code refactoring techniques published first on http://ift.tt/2w7iA1y
Text
Using requestAnimationFrame with React Hooks
Animating with requestAnimationFrame should be easy, but if you haven’t read React’s documentation thoroughly then you will probably run into a few things that might cause you a headache. Here are three gotcha moments I learned the hard way.
TLDR: Pass an empty array as a second parameter for useEffect to avoid it running more than once, and pass a function to your state's setter function to make sure you always have the correct state. Also, use useRef for storing things like the timestamp and the request's ID.
useRef is not only for DOM references
There are three ways to store variables within functional components:
We can define a simple const or let whose value will always be reinitialized with every component re-rendering.
We can use useState whose value persists across re-renderings, and if you change it, it will also trigger re-rendering.
We can use useRef.
The useRef hook is primarily used to access the DOM, but it’s more than that. It is a mutable object that persists a value across multiple re-renderings. It is really similar to the useState hook except you read and write its value through its .current property, and changing its value won’t re-render the component.
For instance, the example below will always show 5 even if the component is re-rendered by its parent.
function Component() {
  let variable = 5;

  setTimeout(() => {
    variable = variable + 3;
  }, 100)

  return <div>{variable}</div>
}
...whereas this one will keep increasing the number by three and keeps re-rendering even if the parent does not change.
function Component() {
  const [variable, setVariable] = React.useState(5);

  setTimeout(() => {
    setVariable(variable + 3);
  }, 100)

  return <div>{variable}</div>
}
And finally, this one returns five and won’t re-render. However, if the parent triggers a re-render then it will have an increased value every time (assuming the re-render happened after 100 milliseconds).
function Component() {
  const variable = React.useRef(5);

  setTimeout(() => {
    variable.current = variable.current + 3;
  }, 100)

  return <div>{variable.current}</div>
}
If we have mutable values that we want to remember at the next or later renders, and we don't want them to trigger a re-render when they change, then we should use useRef. In our case, we will need the ever-changing request animation frame ID at cleanup, and if we animate based on the time passed between cycles, then we need to remember the previous animation's timestamp. These two variables should be stored as refs.
The side effects of useEffect
We can use the useEffect hook to initialize and clean up our requests, though we want to make sure it only runs once; otherwise it's really easy to end up doubling the number of animation frame requests with every animation cycle. Here's a bad example:
function App() {
  const [state, setState] = React.useState(0)
  const requestRef = React.useRef()

  const animate = time => {
    // Change the state according to the animation
    requestRef.current = requestAnimationFrame(animate);
  }

  // DON’T DO THIS
  React.useEffect(() => {
    requestRef.current = requestAnimationFrame(animate);
    return () => cancelAnimationFrame(requestRef.current);
  });

  return <div>{state}</div>;
}
Why is it bad? If you run this, the useEffect will trigger the animate function, which will both change the state and request a new animation frame. Sounds good, except that the state change will re-render the component by running the whole function again, including the useEffect hook, which will spin up a new request in parallel with the one that was already requested by the animate function in the previous cycle. This ultimately doubles our animation frame requests each cycle. Ideally, we only have one at a time. In the case above, if we assume 60 frames per second, then we'll have 1,152,921,504,606,847,000 animation frame requests in parallel after only one second.
To make sure the useEffect hook runs only once, we can pass an empty array as a second argument to it. Passing an empty array has a side effect, though, which prevents us from having the correct state during animation. The second argument is a list of changing values that the effect needs to react to. We don’t want to react to anything — we only want to initialize the animation — hence we have the empty array. But React will interpret this to mean that this effect doesn’t have to be kept up to date with the state. And that includes the animate function, because it was originally called from the effect. As a result, if we try to get the value of the state in the animate function, it will always be the initial value. If we want to change the state based on its previous value and the time passed, then it probably won’t work.
function App() {
  const [state, setState] = React.useState(0)
  const requestRef = React.useRef()

  const animate = time => {
    // The 'state' will always be the initial value here
    requestRef.current = requestAnimationFrame(animate);
  }

  React.useEffect(() => {
    requestRef.current = requestAnimationFrame(animate);
    return () => cancelAnimationFrame(requestRef.current);
  }, []); // Make sure the effect runs only once

  return <div>{state}</div>;
}
The state’s setter function also accepts a function
There’s a way to use our latest state even if the useEffect hook locked our state to its initial value. The setter function of the useState hook can also accept a function. So instead of passing a value based on the current state as you probably would do most of the time:
setState(state + delta)
... you can also pass on a function that receives the previous value as a parameter. And, yes, that’s going to return the correct value even in our situation:
setState(prevState => prevState + delta)
Putting it all together
Here’s a simple example to wrap things up. We’re going to put all of the above together to create a counter that counts up to 100 then restarts from the beginning. Technical variables that we want to persist and mutate without re-rendering the whole component are stored with useRef. We made sure useEffect only runs once by passing an empty array as its second parameter. And we mutate the state by passing on a function to the setter of useState to make sure we always have the correct state.
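The demo itself lives in the CodePen below; its essentials look something like this sketch (details may differ slightly from the Pen):

function App() {
  const [count, setCount] = React.useState(0)
  const requestRef = React.useRef()
  const previousTimeRef = React.useRef()

  const animate = time => {
    if (previousTimeRef.current !== undefined) {
      const deltaTime = time - previousTimeRef.current;
      // Pass a function to the setter so we always read the latest state
      setCount(prevCount => (prevCount + deltaTime * 0.01) % 100);
    }
    previousTimeRef.current = time;
    requestRef.current = requestAnimationFrame(animate);
  }

  React.useEffect(() => {
    requestRef.current = requestAnimationFrame(animate);
    return () => cancelAnimationFrame(requestRef.current);
  }, []); // empty array: initialize the loop exactly once

  return <div>{Math.round(count)}</div>;
}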
See the Pen Using requestAnimationFrame with React hooks by Hunor Marton Borbely (@HunorMarton) on CodePen.
The post Using requestAnimationFrame with React Hooks appeared first on CSS-Tricks.
Using requestAnimationFrame with React Hooks published first on https://deskbysnafu.tumblr.com/