#the caching behaviour got me for a while
it would be a bridge too far to say that I understand how git submodules work now but I'm misunderstanding them less
8 notes
evolution & everything happens for a reason.
Okay, so pretty much everyone since Darwin has heard about evolution by natural selection. BUT this does nothing to change the fact that it’s still such an interesting and exciting topic!! I’m not going to drone on about the theory of evolution - no, Charles did that for us. Instead, I really want to talk about how having some knowledge of that theory makes my time in nature that much more magical. In this way, I hope to bring the three guiding facets of interpretation together - education, recreation, and inspiration (Beck et al., 2018).
(Side note: I bought a copy of “On the Origin of Species” when I got accepted to UofG, and still have not managed to make my way through it. No hate to Darwin, but I think we could’ve taken some notes from this class to make that read a bit more engaging - jokes, of course. If any of you have read it in its entirety, I’d love to hear your thoughts…is it worth the read? Did he include anything that would be deemed a “hot take” in our modern day?)
In biological studies, we come back to evolution all the time, and we blame it for nearly everything. At this point, I’ve learned the more mechanistic view of evolution, the misconceptions about it, and where we see it in ourselves and the rest of the biological world.
And yeah, makes sense, right?
But for me, it all really clicked last semester in my Animal Behaviour class, which pulled a lot of ideas from economics, cost and benefit, and the prisoner’s dilemma (cue loud groan). I know, I know, booooring.
But honestly, it really put it all into perspective for me – the grandiose concepts of evolution finally had a really solid foundation, such that the story of any natural sight I see is clearer in my mind.
Like, okay, why do parents take care of their young?
Silly question, right? But really think about it for a sec. Well, we know that offspring are genetically related to their parents – if a parent doesn’t take care of their young, the young (and the parent’s genes, and even potentially the act of providing for young) do not persist.
We also know that in some species, one parent (mother or father) puts way more energy into raising the young than the other parent does. Again, why? If they’re both equally related, why isn’t this behaviour equal between the two?
There are a lot of “it depends” here, but one example is that the mother can be 100% sure that those babies are hers, while the father can’t be quite as sure – what if the mama snuck off with another fellow and those kids don’t have any of the “father’s” genes?
Basically, to hedge his bets, the father doesn’t spend his energy on raising young, and instead spends it looking for other potential partners.
who woulda thought that evolution would explain why there's so much drama and gambling in the natural world??
A Friend in Need (1903) by Cassius Marcellus Coolidge
My other favourite example has to do with food caching behaviour in red squirrels vs. grey squirrels. Grey squirrels hide food all over the place, spreading out their cache. Red squirrels make one big stockpile. So, if a grey squirrel defends its caches, it wastes a ton of energy, almost for nothing. It physically couldn’t manage to guard all its nuts at once, so defending one cache leaves an opportunity for other caches to be robbed.
A red squirrel, though, benefits a lot from defending its cache. If it does, it stands a much higher chance of keeping itself fed through the winter, and if it doesn’t, it has lost all of the eggs from its single basket. This explains why red squirrels are the angry little guys they are – they aren’t just evil little devils who’ve escaped from hell. Instead, they just got out of their econ lecture!
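(And because the econ-lecture framing genuinely helps, here's a tiny back-of-the-envelope version of that squirrel trade-off in Python. To be clear, every number in it is invented purely for illustration - the only real assumption is the one above, that a squirrel can only guard one cache at a time.)

```python
# Toy expected-value model of cache defence. All numbers are invented
# for illustration; only the shape of the comparison matters.

def expected_food(n_caches, food_per_cache, defend, defence_cost, theft_risk):
    """Expected food kept through winter.

    A squirrel can physically guard at most one cache at a time, so
    defending a scattered hoard leaves every other cache exposed,
    while defending a single stockpile protects everything.
    """
    kept = 0.0
    for i in range(n_caches):
        protected = defend and i == 0  # only the first cache gets guarded
        kept += food_per_cache * (1.0 if protected else 1.0 - theft_risk)
    return kept - (defence_cost if defend else 0.0)

# Grey squirrel: 20 small scattered caches of 5 units each.
grey_defend = expected_food(20, 5, True, defence_cost=30, theft_risk=0.5)
grey_ignore = expected_food(20, 5, False, defence_cost=30, theft_risk=0.5)

# Red squirrel: one big stockpile of 100 units.
red_defend = expected_food(1, 100, True, defence_cost=30, theft_risk=0.5)
red_ignore = expected_food(1, 100, False, defence_cost=30, theft_risk=0.5)

print(f"grey: defend={grey_defend}, ignore={grey_ignore}")  # defending loses
print(f"red:  defend={red_defend}, ignore={red_ignore}")    # defending wins
```

With these made-up numbers, the grey squirrel comes out ahead by shrugging off thieves, while the red squirrel comes out ahead by standing guard - the same cost-benefit story, just with arithmetic.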
Photo: https://www.flickr.com/photos/12144772@N06/1700328393
So, while it might seem that going through the mild pains of learning the theory and its economic/math-y/mechanistic intricacies would make nature as a whole feel less magical, I think it does the exact opposite. I feel like knowing these connections paints a really bright hue on my view of nature. “Why is that thing the way it is?” is such a cool, whimsical question to get caught up in, and I love it.
We've been educated, we've had some fun looking at some silly animal examples, and hopefully there was a hint of inspiration in here too!
Mother Nature really said “everything happens for a reason” and I think that’s super neat.
Anyone else have an "evolution epiphany" moment to share?
References
Beck, L., Cable, T. T., & Knudson, D. M. (2018). Chapter 3: Values to Individuals and Society. In Interpreting Cultural and Natural Heritage for a Better World (pp. 41-56). Sagamore Publishing.
4 notes
what the hell am i on about? ok, it's simple, I'll explain. but first! you might want to read this novel from 1962 by james baldwin, this novel from 2022 by porpentine charity heartscape, and this absolutely brutal forced-detransition novel by my partner yvette that i just finished reading. ok, now -- you ok there buddy? yeah i know that was kind of intense, bear with me now. now i need you to get up to speed with the free energy model of the brain, maybe take a minute with deleuze and guattari there but not too long! - a summary of the schizophrenia thing will do, and we'll need to dip into the AI subculture for a bit to get the shoggoth meme, probably need janus's simulator theory to flesh that out, then maybe some light quantum mechanics, and if maybe you can read this piece i wrote about lsd inspired metaphors for thought and this one i wrote about roleplaying metaphors, that would save some explaining? while we're at it, let's cover the psychology experiments on introspection and confabulation.
ok so: the human brain is a loss-minimising predictive/generative dynamics model (a shoggoth, in the rat parlance), which outwardly exhibits a superposition of narratively defined character modes (in the sense of oscillation mode) or simulacra, which are excited differently depending on context-defining stimuli. the underlying mechanism is opaque to the simulacra, which must continually construct explanatory narratives to account for their own actions and feelings, as vividly depicted in baldwin's novel.
the evolving state that is being predicted against is a combination of internal and external, shaped by the caching and associative retrieval of memories which become inputs into the prediction/action/refinement process. we are bombarded by a profusion of possible self narratives exhibited by our perception of people around us and cultural representations, and continually selecting which ones to perform, which increasingly commits us towards a particular mode by the predictive system's drive to maintain consistency, over time congealing into 'personality' as memories accumulate. however, it is always possible for other modes to be excited by context, from a 'worksona' to a trauma flashback.
depending on the dynamics of a particular brain (e.g. autism), generating some types of character modes will come easier to it than others, a subject explored in charity and yvette's novels. it is possible for these different behaviour modes to diverge sufficiently as to associatively key into different sets of memories and hence develop distinct identities, which is termed plurality. but the superpositions are universal! everyone is always performing their "shoggoth"'s predictive idea of themselves, i.e. what their brain expects 'they' would do in this circumstance, as informed by the combination of associative memories and sensory input.
and then: 'adhd' is me generating with so many different self-models that "i" vacillate between them? the dynamics lead to modes activating and joining the superposition 'too easily' (no focus) or getting suppressed (hyperfocus)? something to do with 'temperature'? something to do with the flushing, or not, of working memory? this part needs to cook more, i can feel the shape of it, but it's not quite there for me yet. but this finally feels like a way to account for all the different 'wills'.
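to make the temperature hunch a bit more concrete, here's a toy softmax over mode activations - all the mode names and scores below are invented, and this is strictly a metaphor-with-numbers, not a claim about real neurodynamics:

```python
import math

def mode_mix(activations, temperature):
    """Softmax over candidate self-model activations.

    Low temperature: one mode dominates the superposition (hyperfocus-ish).
    High temperature: many modes stay live at once (no-focus-ish).
    Purely an illustrative metaphor, not a model of any actual brain.
    """
    exps = [math.exp(a / temperature) for a in activations]
    total = sum(exps)
    return [e / total for e in exps]

# Invented context scores for three character modes.
acts = [2.0, 1.0, 0.5]  # say: worksona, home-self, poster-self

focused = mode_mix(acts, temperature=0.2)    # sharply peaked on one mode
scattered = mode_mix(acts, temperature=5.0)  # nearly uniform across modes
```

turning the temperature down collapses the mix onto a single mode; turning it up keeps the whole superposition live at once.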
my train just got in so I'll leave it there for now lol
you're a shoggoth in a mask too!
31 notes
i was manipulated
I’m not much of a writer; I’m much more a poet. Long, well-thought-out sentences never work well for me, but little short phrases that express how I feel usually work so much better in helping me to process my hurt. But for some reason that’s not been useful for me at all. I broke up with my ex about 6 weeks ago now, and even though my world shouldn’t feel different at all, everything has changed. My ex was my favourite person in the entire world; I wanted to spend most of my time with them, I knew I wanted them to be in my life forever, and I’d hoped it would be in a romantic capacity. We would have moved into a house somewhere in the countryside, adopted cats and dogs, and eventually adopted kids together. I’d built an entire life with them where I didn’t hurt anymore, far away from all the people who had ever hurt me, who had ever wanted to hurt me, far away from every place that housed painful memories within their walls. I gave everything to them, every ounce of the energy I had went to them, and I happily gave it all up. I thought that’s how relationships were meant to be: I gave 100% and shouldn’t have expected anything back. I shouldn’t have needed to expect it! 25% some days, maybe 50% on a good day, but it was okay. I had enough for the both of us! I was so very wrong.
Even as I write this, I’m thinking of how they would react if they read it, how they wouldn’t agree or how they would show their friends this and argue that I am painting them in an incorrect light, what I am saying isn’t true to the situation, that I’m overreacting or twisting the truth to suit my own agenda. And that’s so wrong. I should not have to tiptoe around someone who is living in my mind rent-free, should not be shielding their feelings in any way, shape or form from the truth. And this is how I know I was in a manipulative relationship, or at the very least, I was manipulated.
I didn’t recognise I was in a manipulative relationship until it was pointed out to me, and this seems to be a common theme for people in manipulative relationships. I was once asked if I thought they were manipulative, and I said no! I did not feel manipulated at all, had never seen any manipulative behaviour from them, so how on Earth could I think they were manipulative? This question stemmed from something their previous ex had said about them, and if I agreed with them, I was in massive trouble. Even if I had felt manipulated, how could I answer that question truthfully? They were very insecure about this ex, did not understand why they had been blocked from their life without any reasonable explanation, and so reacted in a way that they thought was appropriate as a result of this. (I recognise that is very vague; however, I made a promise to myself that I will never expose intimate details about somebody’s life, as it is detrimental and unkind.) And as such, this ex was intertwined within a much bigger situation. And so, asking them not to bring up their ex was completely out of the realm of possibility. I was then accused of telling them not to speak about the much bigger situation by asking them not to talk about this ex. I had asked them not to talk about this ex as it felt as though I was constantly being compared to them, which was not good for me (as it would not be for many people).
It started with the little things, the smallest of things that my friends picked up on. “You seem sad, what’s happened?” “What’s happened now?” “You argued again? About what?” These questions were constantly asked, and I always had excuses. I ranted and raved to them, then got side-lined by my feelings (and calls and texts asking if we can just sort it out). “It’s just because I’m stressed.” “Things will be better when we see each other!” “I’m just frustrated because I miss you so much.” Pointless arguments. Constant crying. But it was all fine because we always sorted it out, we told each other no more arguments, this was the last straw. No more “second chances”.
My friends would ask me if I was happy, and I can say that I was. 100%. I was very happy, but I felt myself slipping. I felt pieces of myself disappearing, aspects of my life that used to make me happy no longer did. I withdrew from my friends; they didn’t understand that I love them! They didn’t see the parts I did, the loving, caring parts. The parts that would listen to me sob in the middle of the night, the parts that would tell me how much they loved me, how much I meant to them, how I was their soulmate and that they didn’t realise what love was until they met me. If I could show them those parts, then of course they would accept the relationship and accept that I was happy!
I withdrew from my family, going home for the weekend only to spend most of my time with my ex. I missed important moments, but it was okay! They asked me to spend time with my family and it was my choice not to! I now wish I had spent more time with my family; they missed me so much and I was so blinded by my love that I didn’t recognise that they needed to be a priority. After spending all day with my ex, I was made to feel awful about the fact that I didn’t want to fall asleep on the phone with them. It was a “tradition” that we had forged together, one that was sacred to them, but was draining for me. Spending time with my family in the evenings was off-limits, them going to bed early to seemingly spite me. Or saying they were going to bed so I should call them to say goodnight. Only to end up on the phone with them until it would just be silly to stay up and do something with my family, so “might as well” just go to bed at the same time as them. I now recognise that I was manipulated into believing that I had the choice; any decision that I made was heavily influenced by them. And it should not have been.
Manipulated is a big word, but the research that I have done after the fact leads me to believe that I was manipulated, or that there was at least an element of manipulation present within the relationship. I was essentially isolated, feeling guilty for spending time with my friends or going out with my friends. I was not wearing clothes that they liked when I went out, them saying they “trusted me but not other people”. Them saying that they didn’t think my friends liked them, until I stopped spending as much time as I would’ve liked with them. I was once asked if I had logged into their accounts without them knowing, them saying they trusted me but “just wanted to make sure”. I often felt as though I was going insane, and at times began doubting my own sanity as I was being gaslighted about the smallest things. “No, you didn’t tell me that.” “You’ve only told me about that once.” “You have never brought that up in an argument before.” There were a lot of other, more intimate situations where I felt as though I was being manipulated; however, this is too hard for me to talk about right now. All of these things are small in isolation, but once you see the bigger picture, it all becomes so much clearer.
Their friends think I am crazy, think that I am irresponsible and do not own up to mistakes that I have made and accept responsibility. And they are entitled to this opinion. I believe that with the information they possess about me, that is a logical conclusion to come to. But the issue is that all the information that they have access to, and have had access to, is through my ex. The limited interactions I have had with their friends do not yield an impressive cache of information about me straight from the source. I got yelled at by one of their friends, and she demanded that I give her my address so that my things could be given back to me. Now, this invasion of privacy may sound insane; however, this is something that I am used to. Shortly after the breakup, information about my eating disorder was given to the friend in question. And when confronted about it, excuses were made, and a half apology was given. The point must be made that if my ex was able to so freely give out that kind of information after the breakup, what kind of information was being given while we were together? An intimate situation like an eating disorder was clearly not off-limits, so the question of what would have been off-limits needs to be posed. Where was the line drawn? What was just knowledge to be freely tossed into everyday conversation?
My feelings were seemingly too much; I can think of many moments where my feelings were pushed aside and dwelled on as an afterthought, or not dwelled on at all. When this was brought up, it was my fault, and among all the sobs I heard from them, I believed I was being too harsh and that it was not fair that I was also side-lining their feelings. But eventually I recognised that this was wrong, and I did not stand for it any longer. I suppose I should have realised there was a problem then and there, when the times that I had said “I didn’t want to tell you this before, but I need to talk about it” piled up like things on my to-do list. Or when the times that I lay crying silent tears in bed turned from every once in a while, to every couple of weeks, to most days. It was a problem that I refused to accept, despite my friends telling me that I needed to recognise it as the bad sign it was. I also remember a moment a few days before our breakup when a comment was made about me wearing a dress that I loved so much. I will never forget the sinking feeling when I heard it; the echoes of a talk I’d had on recognising the signs of an abusive relationship rang loud, with more distant echoes of a talk I’d had at school ringing warning bells in my head. It was brushed off as quickly as I brought it up; I was told that I needed to stop being so mopey and that I was ruining the night by being sad. A few rushed sympathetic looks from me, and I pulled myself together. It was like that for a lot of things, I recognise this now. I am disappointed with myself. “I am a nurse, for crying out loud,” I scream to myself. “This should not have happened to me!” I cry into a pillow. “I should’ve known better,” I deadpan to friends.
As I said earlier, I was so happy most of the time. I do not wish to convey that I wasn’t, there were many good times and they weren’t manipulative all the time. I was in control at times, I made mistakes, I fucked up, I was an awful person. I am not saying that this person is bad, I am not saying that their heart is devoid of love, I do believe that good people can do bad things, but that does not mean that they are 100% bad. There are many happy memories that I will cherish, many times where I felt as though I had found the right person for me, that I had found my soulmate. And I will never forget or regret all the times we spent together in each other’s company, content and not wanting for a single thing. In the beginning we spent hours just talking about anything, I would laugh and laugh and feel as though I was the luckiest person in the world. I had never smiled bigger, I had prayed for happiness and God had given me what I had wanted, with the addition of another person who could love me with all their heart. My point is, I had never known joy like it, but I had also never felt heartache like it. The highs and the lows were deafening. But the highs were there, so I did believe that they were worth all the lows.
I feel guilt even after the breakup, so much so that I felt sick with grief and guilt because of what I had done. I was the one to have finished things completely (although we do not agree on that detail) and as such held a lot of guilt for being the one to turn my back on the relationship, so to speak. This guilt continued as I tried to be friends with them, all coming to a head when a talk in a coffee shop led to a screaming match and them storming off. It continued further during a text message conversation where I was made to feel guilty about the fact that they are “crying all the time”, worrying that they will never “be happy again” or “fall in love again” or “be intimate with anyone ever again”. In one of the last text messages, they apologised for “if they’ve ever made me feel guilty”. I responded with an appropriate “it’s not IF you ever did, you did.” It was very hard for me to have a backbone, and I worried that I was being too harsh for days after the fact. Suffice to say, we no longer speak. I even feel guilty for having written this, for even considering putting it online for others to see. I suppose there is no reason for me to put this online, but perhaps I need to. Perhaps it will help someone else who may be in a relationship where objectively they are happy, but they are worried about things. Or they do not know if they are being manipulated. Perhaps this can help someone make the first step towards leaving a relationship that is unhappy. Perhaps this can help someone talk to friends and family or someone they trust about their worries.
The thing I must stress is that there is no blanket definition for a manipulative relationship, but there are a multitude of resources that can help you take the right steps if you are worried or scared in your relationship.
All of this leads me to say that I am hoping not to have anger anymore, not to hold on to the things that keep me awake at night, the things that I worry about for future relationships. I am scared to even entertain the idea of being with another person, for someone to know the ins and outs of my life, scared to smile for fear that I’m doing something wrong, or disappointing someone. I can let go of the anger, but it’s the anxiety that I don’t think I can let go of. Or the immense sadness that I feel when I hear their name, the nauseous feeling I get when I remember the bad times, or the happiness I felt when I woke in the middle of the night to see them next to me. My bed feels bigger somehow, like there’s too much room. No longer am I cramped up against a wall, but habit dictates that I somehow wake up pinned to one side or with arms outstretched, expecting to feel another body next to me. I almost feel guilty about the fact that I am dating - no dates just yet, but I am speaking to people and expanding my horizons. I yearn for one person, but it is wrong for me to do so, and detrimental to the both of us, for I was not the best girlfriend; I had flaws of my own which I will not deny at all. I feel the anger inside of me, but it’s not healthy for me to dwell on it. I felt the waves of anger crash and riddle my mind with thoughts, but the best thing for me to do is to let it pass through as if it is a train not due to make a stop. I can see the thoughts and anger and recognise that they are there, but they do not need to be so prevalent within me. I have made my peace with everything that has happened, and I have learnt some very hard lessons. The only person I am to please now is myself, as much as it may hurt me to do so, as I am a massive people pleaser.
So, even though I was in a manipulative relationship, it does not define me. It does not have to be my story; I don’t need to think about it ever again if I do not wish to. But what I will think about is red flags, and what I will do is listen to the people who I love and consider their opinion of anyone I wish to bring into my life. Most importantly, I will listen to myself and not let myself get into this position again, being much more careful with the people I allow to become close to me and recognising that there are warning signs. I may miss the signs as I fall for this hypothetical person, but hopefully, as I listen to my friends and listen to my own gut, I won’t ignore them when they scream out at me. My manipulative relationship was full of great times as well as bad times, and so I didn’t realise that it was happening; it does not have to be all tears and arguments for it to be manipulative, or controlling, or abusive.
I finish this with something I read as I was researching: anyone can fall into a manipulative relationship, no matter how smart, savvy or feminist you are - and realising that you’re in one doesn’t make you any less smart, savvy or feminist. It is not a reflection on me that this happened, and it is not my fault that this happened. It will never be my fault.
6 notes
Psychopolitics and Surveillance Capitalism
I queued this post quite a while ago and it posted last night while I was asleep. I’m reposting because I’ve been thinking about this a bit more since I first saw it. I’ve shortened the original quote here:
[H]ealing ... refers to self-optimization that is supposed to therapeutically eliminate any and all functional weakness or mental obstacle in the name of efficiency and performance. Yet perpetual self-optimization ... amounts to total self-exploitation. [...] The neoliberal subject is running aground on the imperative of self optimization, that is, on the compulsion always to achieve more and more. Healing, it turns out, means killing.
and had a look at this review. From the review: “[W]hat capitalism realised in the neoliberal era, Han argues, is that it didn’t need to be tough, but seductive. This is what he calls smartpolitics. Instead of saying no, it says yes: instead of denying us with commandments, discipline and shortages, it seems to allow us to buy what we want when we want, become what we want and realise our dream of freedom. ‘Instead of forbidding and depriving it works through pleasing and fulfilling. Instead of making people compliant, it seeks to make them dependent.’”
I’m adding a break because this got long.
(review, cont’d)
And, while not Orwellian, we networked moderns have our own Newspeak. Freedom, for instance, means coercion. Microsoft’s early ad slogan was “Where do you want to go today?”, evoking a world of boundless possibility. That boundlessness was a lie, Han argues: “Today, unbounded freedom and communication are switching over into total control and surveillance … We had just freed ourselves from the disciplinary panopticon – then threw ourselves into a new and even more efficient panopticon.” And one, it might be added, that needs no watchman, since even the diabolical geniuses of neoliberalism – Mark Zuckerberg and Jeff Bezos – don’t have to play Big Brother. They are diabolical precisely because they got us to play that role ourselves.
At least in Nineteen Eighty-Four, nobody felt free. In 2017, for Han, everybody feels free, which is the problem. “Of our own free will, we put any and all conceivable information about ourselves on the internet, without having the slightest idea who knows what, when or in what occasion. This lack of control represents a crisis of freedom to be taken seriously.”
“Did we really want to be free?” asks Han. Perhaps, he muses, true freedom is an intolerable burden and so we invented God in order to be guilty and in debt to something. That’s why, having killed God, we invented capitalism. Like God, only more efficiently, capitalism makes us feel guilty for our failings and, you may well have noticed, encourages us to be deep in immobilising debt.
I think I’m going to get this book. It would make a great pairing with Surveillance Capitalism by Shoshana Zuboff. (I’ve linked to a review; the book is available on Amazon and elsewhere.) I have Zuboff’s book but haven’t read it yet. Think about this:
“Surveillance capitalism unilaterally claims human experience as free raw material for translation into behavioural data. Although some of these data are applied to service improvement, the rest are declared as a proprietary behavioural surplus, fed into advanced manufacturing processes known as ‘machine intelligence’, and fabricated into prediction products that anticipate what you will do now, soon, and later. Finally, these prediction products are traded in a new kind of marketplace that I call behavioural futures markets. Surveillance capitalists have grown immensely wealthy from these trading operations, for many companies are willing to lay bets on our future behaviour.”
From the review: “The combination of state surveillance and its capitalist counterpart means that digital technology is separating the citizens in all societies into two groups: the watchers (invisible, unknown and unaccountable) and the watched. This has profound consequences for democracy because asymmetry of knowledge translates into asymmetries of power. But whereas most democratic societies have at least some degree of oversight of state surveillance, we currently have almost no regulatory oversight of its privatised counterpart”.
Part of my job is related to the regulatory oversight of the private sector, and I definitely think that it is an absolute mess. Countries have vastly different rules, but data doesn’t respect borders. Different countries have different goals. The EU’s data laws protect the individual. China’s data laws protect the state. The US’s data laws protect the economy. (With a few exceptions, the laws are really about what can be monetized and what can’t.)
So what is an individual supposed to do? I struggle with the best way to protect my own privacy and personal data, and to teach my teens to do the same, let alone put it into a socio-political context.
I don’t think it’s possible to completely opt out of the surveillance and still participate in modern life. It’s a bit easier for old people like me to opt out, but I see younger people whose peer groups socialise to such an extent through apps and phones (Snapchat, Instagram, etc.). The problem is that if they are not on these platforms, they are to a very large extent excluded from social life, and humans are social animals. It’s not healthy for them to be isolated.
OTOH, we can make some choices. For example, I have a Facebook account (I have 3, actually), but the one with my real name is just for an online course that uses a FB group for discussion. One is for testing. One is my “real” account that does not use my real name where I keep in touch with family since I live half a world away. I log out every time I use it. I never gave FB my phone number or location or work place or hometown etc etc. I opted out of any advertising that I could, particularly adverts using my own Likes. I opted out of all third party platforms so I cannot accidentally log into a third party site with FB. I do not upload photos of my children. I cannot be tagged. I opted out of facial recognition. I check settings once a week in case they are “accidentally” reset. I check after upgrades and so forth. I don’t use FB messenger. I don’t use the FB app. I log out and clear my cache and cookies regularly. I download all of my FB data from time to time (I think a lot of people did this after the Cambridge Analytica scandal) and check that it’s accurate and I’m ok with what’s out there. (btw, one of my professional highlights was writing about Cambridge Analytica in 2017, before the scandals broke in early 2018, w00t).
Also, I do not have any google accounts. At all. I don’t use gmail. I cannot sign into google maps. If someone sends me a google doc for editing, I ask for a copy, edit it and send it back. (This is rarely an issue though; I think it’s happened twice.) I used to have a Youtube account, and when they changed the settings to log in with a google account and not just an email, I created an account on a separate computer, logged in, deleted all of my videos and then deleted my youtube account, then deleted my google account and then cleared my cookies and cache. I think this was 2008.
But truth be told, this is not much. I know that. Amazon knows which Audible books I listen to, which Kindle books I read, and which paperbacks I buy. It goes on and on.
Is there a balance? Are our only choices to opt in, submitting to this surveillance, or to live off the grid? This isn’t simply a matter of updating data privacy laws. The issues that need to be resolved underpin the entire economy and political order.
Food for thought, anyway. (So how’s your quarantine going?)
2 notes
·
View notes
Text
A Bountiful Harvest
Hey there, the sins of Mary Poppins. Let's see... I'm still holding that MLP review hostage. Really, trust me on this one! But what to do in the meantime? We've done a bunch of Suicide Squad lately, so let's do the other one. Anyone remember the other one~?
Here's the other one's cover:

Hey, I said we weren't doing Suicide Squad this week! Nice and wintery for the holidays, though, right? Can't be great for Croc. Also, whose hand is that in the foreground? Enchantress and Katana aren't in this issue, and everyone else is accounted for. No one wears bracelets like that. Whose hand is this??
So we open in the Arctic, I presume. It says "somewhere on the top of the world", so it sounds plausible. Killer Croc and Captain Boomerang are complaining about the weather, but Jason and Artemis are finally in agreement on something: it's just snow, suck it up, weenies. Gotta say, the Arctic is not just snow, honestly. What sad times are these that I have to agree with anything Boomerang says. Bizarro tells them all to quieten up a bit, everything's fine. As a side note, this issue is titled "#MissionCreep". So you know, that's annoying. Bizarro heat-visions a spot in the ground open, revealing the secret entrance to Harvest's base. Oh yeah, that's what we were doing~
Even once inside, the Suicide Squad continues to complain, to Artemis' (and the reader's) continued irritation. There's too much interference to contact Waller, so they're on their own. Bizarro quickly divides them up into teams to save time. Jason and Croc will go to find any abandoned resources, Artemis and Harley are to look for survivors, and Deadshot, Bizarro, and Boomerang will shut down the energy core before it explodes. Deadshot grouses that Bizarro's not the boss of them (and he's not so big), but Harley is the actual team leader, and she chooses to defer to Bizarro. So, I guess suck it, Deadshot~
Now here's a bit I like quite a lot. Croc and Jason get to chatting as they go off to their destination, and Jason asks how Roy is. The fact that Killer Croc is Roy's Alcoholics Anonymous sponsor continues to be one of my most favourite things in comics, and I'm glad it still comes up every now and again. Croc tells Jason that Roy calls him up whenever he's thinking about drinking, which is unfortunately frequent these days. Jason agrees that it doesn't sound healthy, but they are interrupted as they find something else unhealthy: the body of Harvest. Yes, in spite of his near omnipotence in The Culling, Harvest has been killed and crucified. Jason is concerned, but Croc says it's not any of their business and they should just grab the loot and go. Harvest pissed off someone fierce, and my guess is it's probably anyone who actually read The Culling~
Meanwhile, Artemis and Harley do some unnecessary acrobatics to get to where they're going. Harley keeps blathering on, as she does, and Artemis gets more and more irritated, as she does. Artemis stops Harley from going any further, mostly so she can use her vocabulary word for the day, "sacrosanct". Look it up~! Before anything actually interesting can happen on their front, the scene shifts over to Bizarro and his pals, who are in front of an enormous laser grid. Deadshot and Boomerang wonder why he even needs them, and Bizarro replies that Harvest built his base to keep it safe from Superboy, so obviously he can't go in there. He'll tell the other two what to do, and if they follow his instructions, they might not even die~
Back in the storeroom, Jason is geeking out about the cool guns. Croc suddenly gets rather philosophical and asks Jason if doing damage is really what he cares about. He thinks Jason's a good kid who's trying to appear bad. But if you continue down a path of pretending to be bad, you'll actually come out bad. Croc tells him to remember that he has friends like Artemis and Bizarro, and even Roy and Starfire. Jason's very quiet for a while, and then asks if Croc wants to go halvsies on these weapons. Croc says halvsies is fine. Good stuff~
Artemis and Harley, however, aren't getting along quite as well. Artemis choke-slams Harley into a wall for disrespecting the ground on which kids have died. Harley says that she doesn't have the freedom to believe in things like sanctity. Artemis drops her, then sits down to pray for the dead. Harley comes up behind her like she's going to hit Artemis with her mallet. Without even opening her eyes, Artemis tells her if Harley hits her, she will shove that hammer right up her arse. I'm not even exaggerating that, it's literally what she says. Harley drops her hammer and pouts, while also complaining "promises, promises". Harley, don't be a weird pervert.
As Boomerang and Deadshot dodge and weave through the lasers, they notice that Bizarro has actually already left. Wow, even he couldn't stand to be around Boomerang's whinging. They shoot out the core with their respective weapons, and it explodes. As soon as it's gone, Bizarro suddenly reappears, putting his arms around both their shoulders, and apologising for leaving. See, the room was also putting out Kryptonite wavelengths, so he didn't want to hang around. In spite of doing nothing and complaining and threatening Bizarro while he's gone, Boomerang replies "No worries, mate" as soon as he's back, which is all of Captain Boomerang's personality in a nutshell. Well, at least he didn't shit himself this issue~
And... that's about it! The core is destroyed, and everyone just returns to Belle Reve via Bizarro's magic door. Waller doesn't let the Outlaws into the prison, though. At least, not until they've actually been arrested and interned under her care, anyway. She got some weapons out of it and the world isn't destroyed, but she's no babysitter. The Outlaws don't get a word in edgewise, and just go home. In an epilogue, while Jason catalogues his new cache, Artemis enters and continues to raise concerns about Bizarro's new behaviour. Jason replies that she just kind of misses that Bizarro doesn't need them to look after him anymore. She says this is the only part of this she does like. Bizarro's grown up, and the comic ends by showing us he's even brooding on rooftops now~
It’s not a bad issue, really. The Suicide Squad gang is honestly more enjoyable interacting with other people than they ever are with just each other. The stuff with Croc and Jason is really good character work, and I love that they bond over their mutual friendship with Roy. Also, Harvest was so annoying that I don’t even care that he died off panel. Or maybe it was back in Teen Titans. Who cares~? Anyways, it’s good stuff, and if they did more character pieces like this, Suicide Squad might actually be enjoyable. Reap the benefits of crossovers, guys~!
6 notes
·
View notes
Text
The Daily Tulip
The Daily Tulip – News From Around The World
Monday 27th August 2018
Good Morning Gentle Reader…. I hope you slept well and have woken full of the joys of summer… The meteors are quite beautiful this morning, I stood and watched with Bella as streaks of fire zipped across the night sky, then together we walked back to the house, Bella thinking about the cookie I promised her and me looking forward to the fresh Colombian Coffee that was brewing while we walked, it’s 3 years ago that we lost Sadie and almost 5 years since Mackie left us, but I believe they still walk the town with us in the mornings……So before I start getting all maudlin on us, let’s take a look at what’s happened in this mixed up world we call Earth….
DRUG TUNNEL RAN FROM OLD KFC IN ARIZONA TO MEXICO BEDROOM…. US authorities have found a secret drug tunnel stretching from a former KFC in the state of Arizona to Mexico. The 600ft (180m) passageway was in the basement of the old restaurant in San Luis, leading under the border to a home in San Luis Rio Colorado. Authorities made the discovery last week and have arrested the southern Arizona building's owner. They were alerted to the tunnel after the suspect, Ivan Lopez, was pulled over, according to KYMA News. During the traffic stop, police dogs reportedly led officers to two containers of hard narcotics with a street value of more than $1m in Lopez's vehicle. Investigators say the containers held 118kg (260lb) of methamphetamine, six grams of cocaine, 3kg of fentanyl, and 21kg of heroin. Agents searched Lopez's home and his old KFC, discovering the tunnel's entrance in the kitchen of the former fast-food joint. The passageway was 22ft deep, 5ft tall and 3ft wide, and ended at a trap door under a bed in a home in Mexico, said US officials. The drugs are believed to have been pulled up through the tunnel with a rope. This is not the first such discovery - two years ago a 2,600ft tunnel was found by authorities in San Diego, California. Authorities said it was one of the longest such drug tunnels ever discovered, used to transport an "unprecedented cache" of cocaine and marijuana. In July alone, US Border Patrol seized 15kg of heroin, 24lbs of cocaine, 327kg of methamphetamine and 1,900kg of marijuana at border checkpoints nationwide… Comment: Now we know what the “Secret” ingredient is in the Col’s recipe…..
TRUMP ADMINISTRATION 'CONSIDERS FUNDING GUNS IN SCHOOLS'…. The Trump administration is considering allowing schools to access federal education funding to purchase guns for teachers, US media report. The Department of Education (DoE) is looking at allowing states to use academic enrichment funds for firearms, the New York Times first reported. The federal grant being considered for this purpose is one that does not specifically prohibit buying weapons. Congress forbids using federal funds for school safety to purchase weapons. DoE spokeswoman Elizabeth Hill told CBS News: "The department is constantly considering and evaluating policy issues, particularly issues related to school safety." "The secretary nor the department issues opinions on hypothetical scenarios," she added… Comment: I cannot think of a more irresponsible action on behalf of a government…
AUSTRIA REJECTS 'GIRLISH' IRAQI ASYLUM SEEKER…. Austrian officials rejected an Iraqi migrant's asylum application because he was too "girlish", local media say. The 27-year-old's claim to be gay was deemed "unbelievable", in part due to his behaviour, according to reports. He can appeal against the decision. It comes just days after Amnesty International criticised Austria's asylum processes as "dubious". The government has hit back at the criticism, saying its asylum officials work appropriately. In the latest case, the Iraqi asylum seeker was felt to exhibit "stereotypical, in any case excessive 'girlish' behaviour (expressions, gestures)", which seemed fake, Austria's Kurier newspaper reported. Said to be an active member in local LGBT groups, he is understood to have fled Iraq in 2015, fearing for his life. However a spokesman for Austria's asylum office said the decision had been reviewed, and rejected the accusation it contained any "clichéd phrasing" by officials in Styria state, Kurier added. It is the second controversial asylum case in recent days. Last week, activists said that an 18-year-old Afghan asylum seeker had his application rejected because he did not "act or dress" like a homosexual. "The inhuman language in asylum claims does not conform with the requirements of a fair, rule-of-law procedure," Amnesty International said in a report. Interior Ministry spokesman Christoph Poelzl also rejected the accusation officials used "inhuman" language, telling news agency AFP that all employees who assess asylum claims receive training. However, the official involved in the Afghan asylum seeker's case is no longer involved in assessing applications, he added. Austria is currently run by a coalition of the conservative People's Party and the far-right Freedom Party, which came to power following an election dominated by Europe's migrant crisis last year. 
Comment: Immigration should not be based on “Sexual Preference” especially when determined by the “Far Right” party…
CHINA ARRESTS OVER TANG DYNASTY RELIC THEFTS…. Chinese police have arrested 26 people suspected of stealing relics from an ancient burial site. The gang allegedly seized almost 650 objects, including gold and silver cutlery and jewellery, from the Dulan Tombs, which lie on the ancient Silk Road in northwest China. The stolen items date back to the 7th Century, the Chinese Ministry of Public Security said in a statement. The suspects allegedly tried to sell them for about $11m (£7.8m). The objects were said to have been illegally excavated from the tombs, located in the north-western province of Qinghai. Silk, gold, silver, bronze ware and other items have been unearthed at the tombs, of which there are more than 2,000, since 1982. Experts believe that many of the items are of huge historical value as they show cultural exchanges and interactions between East and West during the early Tang Dynasty (618-907). Following the arrests, police will increase their crackdown on cultural relics crimes to better protect the country's cultural heritage, the Chinese government said. (See photographs at https://www.facebook.com/groups/OurPastBeneathOurFeet/ )
INDIA'S 'BIGGEST' PET RESCUE OPERATION IN KERALA FLOODS…. When rescuers in India's flood-ravaged southern state of Kerala reached a flooded hut in the city of Thrissur, the couple living there refused to leave without their 25 dogs. The water was rising, and the dogs were huddled on a single bed. The rescue workers had arrived on boats, and Sunitha, who uses only one name, flatly told them she and her husband would not leave without their stray and abandoned pets. "Our neighbours had been moved to schools and camps nearby. Rescue workers said that we could not bring our dogs to the relief camp," she said. So the workers went back and got in touch with an animal rescue group. Sally Varma of Humane Society International told the BBC that their volunteers arrived soon, and arranged for the dogs to be taken to a special shelter for affected animals. Ms Varma said she has started a fundraiser for the family and its pets so a kennel could be built at their home after the floods recede. Nearly 400 people have died in the worst flooding Kerala has witnessed in a century. Thousands remain stranded. More than one million people have been displaced, with many of them taking shelter in thousands of relief camps across the state. But what is striking is how hundreds of animals are being rescued in the affected areas. In what appears to be one of the biggest animal rescue operations during a natural calamity in India, hundreds of volunteers and animal rescue workers have travelled to flood-affected areas. Social media is awash with dramatic rescue videos: a rescuer removing his life jacket and putting it on a Labrador to help it swim to higher ground; drenched dogs being taken out of flooded homes and kennels; and country boats and inflatable rafts carrying dogs, goats and cats to safety. 
Rescuers have waded through water, and travelled on boats and rafts to treat, feed and rescue hundreds of animals - dogs, cats, goats, cows, cattle, ducks, and even snakes - as the waters have begun receding. Trucks with animal feed and medicines are reaching affected districts. Some animals have been moved to shelter camps, and others to higher ground. A number of animal rescue help lines have been set up, and rescuers are using WhatsApp and social media to respond to calls. "We are getting more than 100 calls a day on our helpline. The number of animals that have been moved to higher ground and rescued must be in hundreds," Anand Shiva of Kerala Animal Rescue said.
Well Gentle Reader I hope you enjoyed our look at the news from around the world this morning…
Our Tulips today are from India, where the Tulip garden with the backdrop of the mighty Zabarwan range of mountains was thrown open for the public, on Sunday by Minister for Floriculture Javid Mustafa Mir.

Asia’s largest tulip garden, which has about 12.5 lakh tulips of 50 varieties in its lap on the banks of world famous Dal Lake in the summer capital, Srinagar, marked the beginning of new tourism season in the Valley.
A Sincere Thank You for your company and Thank You for your likes and comments I love them and always try to reply, so please keep them coming, it's always good fun, As is my custom, I will go and get myself another mug of "Colombian" Coffee and wish you a safe Monday 27th August 2018 from my home on the southern coast of Spain, where the blue waters of the Alboran Sea washes the coast of Africa and Europe and the smell of the night blooming Jasmine and Honeysuckle fills the air…and a crazy old guy and his dog Bella go out for a walk at 4:00 am…on the streets of Estepona…
All good stuff....But remember it’s a dangerous world we live in
Be safe out there…
Robert McAngus #Spain #India #China #USA #Bella
1 note
·
View note
Text
I always feel weirded out when I see transgirls writing about how much they distrust/dislike men, because it seems like the kind of thing one might do if you’re trying to distance yourself from men in other people’s eyes, so I can never tell if it’s due to actual antipathy vs throwing men under the bus because it’s hard to have people take your gender seriously otherwise.
Which is shitty and definitely cisnormativity fucking your shit, because you shouldn’t have to hate a group of people to convince others to stop counting you as part of them. And also doing so hurts the population you’re trying to distance yourself from. And like the very direct harm of being misgendered is bad enough that I’ll forgive transgirls the dispersed harm of saying “All men are bastards” or something, but it’s still not very nice.
Anyway, that wasn’t what I was going to write. Writing while high is mostly hard for memory reasons because it’s hard to follow the same train of thought. But yeah:
I’m starting to think I might have a little bit of the same kind of sexism? But like not endorsed or anything. I don’t think men are bad in some fundamental way. I think men are scary because I don’t understand them. I have no fucking clue what’s going on inside because that’s not the thought pattern that comes naturally to me.
Like I worked really hard to learn how to pass as male to not get stabbed or something and the whole time it was super behaviourist. Like “Oh, yeah, men talk like this. Why? ¿¿¿¿Who knows????”. So just modeling at the behavioural level and ending up with the cached thought that men are Inherently Mysterious and will Never Be Understood.
And that’s obviously just me with my biases but it does lead to a “Men are SCARY” thing because what are they even thinking right now??? I usually feel like I know what the women around me think, which might be somewhat overconfident but w/e. I feel like I’ve probably got a handle on shit and nothing unexpected will happen. I don’t feel that way around guys.
Under this line should be skipped since it won’t make any sense to you and will only be recognisable to me to see when I’m sober.
And I think that is what most explains why I’m more afraid of men in [situation] than women in [situation]. Because I expect that, if a woman intends malice, I’ll at least recognise it and know what to do. But how do I know if a man intends malice? Maybe he did from the very start? How do I know? How do I react?
Unfortunately, distributions of the how mean that men are more likely to be in [situation] than women, so as a whole I’m just 100% worried about and don’t want to deal with [situation]. Besides, I don’t want trying things that could be explody so nvm. Like, what do I do if it still doesn’t work? Say that? And die I guess. ‘sides, “women only” is one of the wrong rules as has been well documented in History by everyone.
So this doesn’t help one bit and I honestly shouldn’t have written it down but it seemed important to have in words.
20 notes
·
View notes
Text
Version 425
youtube
windows
zip
exe
macOS
app
linux
tar.gz
I had a good week. I optimised and fixed several core systems.
faster
I messed up last week with one autocomplete query, and as a result, when searching the PTR in 'all known files', which typically happens in the 'manage tags' dialog, all queries had 2-6 seconds lag! I figured out what went wrong, and now autocomplete should be working fast everywhere. My test situation went from 2.5 seconds to 58ms! Sorry for the trouble here, this was driving me nuts as well.
I also worked on tag processing. Thank you to the users who have sent in profiles and other info since the display cache came in. A great deal of overhead and inefficiency has been reduced, so tag processing should be faster in almost all situations.
The 'system:number of tags' query now has much better cancelability. It still wasn't great last week, so I gave it another go. If you do a bare 'system:num tags > 4' or something and it is taking ages, stopping or changing the search should now just take a couple seconds. It also won't blat your memory as much, if you go really big.
And lastly, the 'session' and 'bandwidth' objects in the network engine, formerly monolithic and sometimes laggy objects, are now broken into smaller pieces. When you get new cookies or some bandwidth is used, only the small piece that is changed now needs to be synced to the database. This is basically the same as the subscription breakup last year, but behind the scenes. It reduces some db activity and UI lag on older and network-heavy clients.
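The general shape of that breakup can be sketched in a few lines: track dirtiness per piece, and on save write only the pieces that changed. This is an illustrative pattern in JavaScript, not hydrus's actual (Python) code, and all the names are invented:

```javascript
// Generic sketch: instead of persisting one monolithic object,
// keep a dirty flag per piece and write only what changed.
class PiecewiseStore {
  constructor(saveFn) {
    this.pieces = new Map(); // key -> data (e.g. per-domain session/bandwidth)
    this.dirty = new Set();  // keys that need saving
    this.saveFn = saveFn;    // e.g. a per-row database write
  }
  update(key, data) {
    this.pieces.set(key, data);
    this.dirty.add(key);
  }
  flush() {
    for (const key of this.dirty) this.saveFn(key, this.pieces.get(key));
    const written = this.dirty.size;
    this.dirty.clear();
    return written; // how many pieces were actually written
  }
}
```

The win is that a cookie or bandwidth change for one network context marks (and later writes) only that one piece, rather than serialising the whole manager every time.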
better
I have fixed more instances of 'ghost' tags, where committing certain pending tags, usually in combination with others that shared a sibling/parent implication, could still leave a 'pending' tag behind. The reasons behind it were quite complicated, but I managed to replicate the bug and fixed every instance I could find. Please let me know if you find any more instances of this behaviour.
While the display cache is working ok now, and with decent speed, some larger and more active clients will still have some ghost tags and inaccurate autocomplete counts hanging around. You won't notice or care about a count of 1,234,567 vs 1,234,588, but in some cases these will be very annoying. The only simple fixes available at the moment are the nuclear 'regen' jobs under the 'database' menu, which isn't good enough. I have planned maintenance routines for regenerating just for particular files and tags, and I want these to be easy to fire off, just from right-click menu, so if you have something wrong staring at you on some favourite files or tags, please hang in there, fixes will come.
full list
optimisations:
I fixed the new tag cache's slow tag autocomplete when in 'all known files' domain (which is usually in the manage tags dialog). what was taking about 2.5 seconds in 424 should now take about 58ms!!! for technical details, I was foolishly performing the pre-search exact match lookup (where exactly what you type appears before the full results fetch) on the new quick-text search tables, but it turns out this is unoptimised and was wasting a ton of CPU once the table got big. sorry for the trouble here--this was driving me nuts IRL. I have now fleshed out my dev machine's test client with many more millions of tag mappings so I can test these scales better in future before they go live
internal autocomplete count fetches for single tags now have less overhead, which should add up for various rapid small checks across the program, mostly for tag processing, where the client frequently consults current counts on single tags for pre-processing analysis
autocomplete count fetch requests for zero tags (lol) are also dealt with more efficiently
thanks to the new tag definition cache, the 'num tags' service info cache is now updated and regenerated more efficiently. this speeds up all tag processing a couple percent
tag update now quickly filters out redundant data before the main processing job. it is now significantly faster to process tag mappings that already exist--e.g. when a downloaded file pends tags that already exist, or repo processing gives you tags you already have, or you are filling in content gaps in reprocessing
tag processing is now more efficient when checking against membership in the display cache, which greatly speeds up processing on services with many siblings and parents. thank you to the users who have contributed profiles and other feedback regarding slower processing speeds since the display cache was added
various tag filtering and display membership tests are now shunted to the top of the mappings update routine, reducing much other overhead, especially when the mappings being added are redundant
.
tag logic fixes:
I explored the 'ghost tag' issue, where sometimes committing a pending tag still leaves a pending record. this has been happening in the new display system when two pending tags that imply the same tag through siblings or parents are committed at the same time. I fixed a previous instance of this, but more remained. I replicated the problem through a unit test, rewrote several update loops to remain in sync when needed, and have fixed potential ghost tag instances in the specific and 'all known files' domains, for 'add', 'pend', 'delete', and 'rescind pend' actions
also tested and fixed are possible instances where both a tag and its implication tag are pend-committed at the same time, not just two that imply a shared other
furthermore, in a complex counting issue, storage autocomplete count updates are no longer deferred when updating mappings--they are 'interleaved' into mappings updates so counts are always synchronised to tables. this unfortunately adds some processing overhead back in, but as a number of newer cache calculations rely on autocomplete numbers, this change improves counting and pre-processing logic
fixed a 'commit pending to current' counting bug in the new autocomplete update routine for 'all known files' domain
while display tag logic is working increasingly ok and fast, most clients will have some miscounts and ghost tags here and there. I have yet to write efficient correction maintenance routines for particular files or tags, but this is planned and will come. at the moment, you just have the nuclear 'regen' maintenance calls, which are no good for little problems
.
network object breakup:
the network session and bandwidth managers, which store your cookies and bandwidth history for all the different network contexts, are no longer monolithic objects. on updates to individual network contexts (which happens all the time during network activity), only the particular updated session or bandwidth tracker now needs to be saved to the database. this reduces CPU and UI lag on heavy clients. basically the same thing as the subscriptions breakup last year, but all behind the scenes
your existing managers will be converted on update. all existing login and bandwidth log data should be preserved
sessions will now keep delayed cookie changes that occurred in the final network request before client exit
we won't go too crazy yet, but session and bandwidth data is now synced to the database every 5 minutes, instead of 10, so if the client crashes, you only lose 5 mins of login/bandwidth data
some session clearing logic is improved
the bandwidth manager no longer considers future bandwidth in tests. if your computer clock goes haywire and your client records bandwidth in the future, it shouldn't bosh you _so much_ now
.
the rest:
the 'system:number of tags' query now has greatly improved cancelability, even on gigantic result domains
fixed a bad example in the client api help that mislabeled 'request_new_permissions' as 'request_access_permissions' (issue #780)
the 'check and repair db' boot routine now runs _after_ version checks, so if you accidentally install a version behind, you now get the 'weird version m8' warning before the db goes bananas about missing tables or similar
added some methods and optimised some access in Hydrus Tag Archives
if you delete all the rules from a default bandwidth ruleset, it no longer disappears momentarily in the edit UI
updated the python mpv bindings to 0.5.2 on windows, although the underlying dll is the same. this seems to fix at least one set of dll load problems. also updated is macOS, but not Linux (yet), because it broke there, hooray
updated cloudscraper to 1.2.52 for all platforms
next week
Even if this week had good work, I got thick into logic and efficiency and couldn't find the time to do anything else. I'll catch up on regular work and finally get into my planned network updates.
0 notes
Text
Drawing inspiration from stale-while-revalidate
There's an HTTP caching directive called 'stale-while-revalidate', which will cause your browser to use a stale cached value while refreshing its cache asynchronously under the hood.
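For reference, the directive is set in the Cache-Control response header. With values like these (arbitrary example numbers), the browser may serve the cached copy for a minute, and for up to an hour after that it may serve it stale while refetching in the background:

```http
Cache-Control: max-age=60, stale-while-revalidate=3600
```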
Making practical use of this setting within my projects proved futile, but it was enough to get me thinking: I want access to that asynchronous response when the browser receives an up-to-date response. I want to make an ajax call and just have my callback fire twice: once with a stale, cached response, and again when I receive up-to-date data. Lucky for me, this is actually pretty trivial to implement in basic JavaScript.
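Stripped of the browser specifics, the pattern is small: hand the callback a cached value immediately if one exists, then fetch fresh data and hand it the result again. A plain-JavaScript sketch, using an in-memory Map as a stand-in for the browser's HTTP cache (the names are mine, not a real API):

```javascript
// In-memory stand-in for the browser's HTTP cache.
const cache = new Map();

// Call `onData` up to twice: once with a stale cached value (if any),
// and again when the fresh value arrives.
function staleWhileRevalidate(key, fetchFresh, onData) {
  if (cache.has(key)) {
    onData(cache.get(key), /* stale */ true);
  }
  return fetchFresh(key).then(fresh => {
    cache.set(key, fresh);
    onData(fresh, /* stale */ false);
  });
}
```

The browser version below does the same thing, except the Map is replaced by the real HTTP cache and the two calls are two ajax requests.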
Use case:
I work on a SaaS product for managing inventory. The user's primary view/screen includes a sidebar with a folder structure, and each folder in the tree includes a count of the records beneath it. This sidebar is used for navigation, but the counts are also helpful to the user.
For power-users, this folder structure can become very large, and counts can reach the hundreds of thousands. The folders that are visible to the user may depend on the user's permission to individual records, and there are a number of other challenging behaviours to this sidebar. In short: it can take some time for the server to generate the data for this sidebar.

Naturally, the best course of action would be to refactor the code to generate it faster. And I'm with you. Refactoring and designing for performance are the ideal solution long-term. But there is a time and place for quick fixes. In our case, we're bootstrapping a SaaS: fast feature implementation leads to winning and retaining customers.
One intuitive improvement we can make is to give users a more progressive loading experience. Rather than generating the entire page at once and sending it to the user as a single HTTP response, we generate a simpler response, and then fetch 'secondary' information like the sidebar via ajax.
```
<div id='sidebar'>
  Loading...
</div>
<script>
$(document).ready(function(){
  $.ajax({
    url: 'https://...',
    success: function(data){
      $("#sidebar").html(data);
      // ... other initialization
    },
  });
});
</script>
```
Now, the user's browsing experience feels faster, with the primary portion of their page loading more quickly and the sidebar appearing some moments later. You do have a new user-state to consider here: the user may interact with the page between when the page loads and when the callback is called, so you'll want to watch that your callback doesn't cause any existing on-screen content to move. You can usually address that by ensuring your placeholder 'loading' content has the same dimensions as the content which will replace it.
Implementation
So how do we implement the idea of stale-while-revalidate? There are just a couple things to do:
Add headers to the ajax response, so the browser knows it can cache it.
ex: Cache-Control: private, max-age=3600
Ensure that our callback can safely be called multiple times.
Perform the ajax call twice.
Ignoring step #2 for a moment, our new code looks like this:
```
<div id='sidebar'>
  Loading...
</div>
<script>
  $(document).ready(function(){
    function sidebar_callback(data){
      $("#sidebar").html(data)
      // ... other initialization
    }
    $.ajax({
      url: 'https://...',
      success: sidebar_callback,
    })
    $.ajax({
      url: 'https://...',
      success: sidebar_callback,
      headers: {'Cache-Control': 'max-age=0'},
    })
  })
</script>
```
That's it! Now, when the user loads the page, their browser makes two HTTP requests. The first will (ideally) be answered from the local cache, and the second will retrieve (and cache) a fresh copy from the server. The user sees the primary portion of the page load, followed almost immediately by the appearance of the sidebar; finally, the sidebar updates in place to reflect any new changes.
Unfortunately, there's step #2 to consider. Now that your callback function runs twice, you have a new state to think about: the user may interact with the site between the two callbacks. In the case of my folder structure, the user may have right-clicked a folder to perform an action, so my callback logic needs to take that into careful account.
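To make that concrete, here's a minimal sketch of a callback that is safe to run twice — the state variables are invented for illustration. It skips the re-render when the revalidated copy is identical, and tears down transient UI state (like an open context menu) before replacing the content:

```
var currentHtml = null; // last rendered sidebar content
var openMenu = null;    // transient UI state, e.g. a right-click context menu

function sidebar_callback(data) {
  if (data === currentHtml) {
    return; // revalidated copy is identical: leave the user's UI alone
  }
  openMenu = null;      // close any context menu before re-rendering
  currentHtml = data;   // in the real code: $("#sidebar").html(data), etc.
}

sidebar_callback('<ul>v1</ul>'); // first call: cached response
openMenu = 'folder-3';           // user right-clicks while we revalidate
sidebar_callback('<ul>v1</ul>'); // fresh but identical: menu survives
sidebar_callback('<ul>v2</ul>'); // content changed: re-render, menu closed
```

The equality check is the cheapest possible change detection; in practice you might compare a version number or ETag sent alongside the payload instead of the whole string.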
But it's worth the effort. Because once you've got a callback you can call multiple times, we're into a new paradigm, baby.
Taking it further
Once your javascript callback can safely be called multiple times, you suddenly have some new options:

- You could periodically poll your ajax endpoint, and update your DOM if anything has changed.
- You could trigger a refresh of your ajax endpoint based on a user interaction, or an external signal from the server.
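A sketch of the first option — the function names here are invented; in real code `fetchSidebar` would be the ajax call and `render` your existing multiple-call-safe callback:

```
// Build a poller that only touches the DOM when the payload changed.
function makePoller(fetchSidebar, render) {
  var last = null;
  return function poll() {
    var payload = fetchSidebar(); // synchronous stand-in for the ajax call
    if (payload !== last) {       // cheap change detection
      last = payload;
      render(payload);
    }
  };
}

// Usage sketch: var poll = makePoller(...); setInterval(poll, 10000);
var renders = 0;
var poll = makePoller(function () { return 'v1'; }, function () { renders++; });
poll(); // first run: renders
poll(); // unchanged payload: skipped
```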
In my case, we implemented these improvements in stages, each one building off the last:

1. We built an MVP with strictly server-side rendering (no fancy JS frameworks).
2. When something got slow, we deferred its generation to an ajax call (simple jQuery).
3. Then we added this stale-while-revalidate idea.
4. We carried the idea over to a 'notifications' pane.
5. We added periodic polling for new notifications (poor man's realtime notifications).
6. We added some simple logic so a new notification triggered a re-fetch of the sidebar (poor man's realtime sidebar).
7. We refactored the notification endpoint (server side) to use HTTP long-polling, making our notifications actually realtime.
Instead of asking the server for notifications every 10 seconds; the browser asks once, and the server intentionally hangs until a new notification exists. (Plus a bit of timeout/re-connection logic.)
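Client-side, long-polling is just a loop: issue the request, process the notification when the server finally answers, and immediately re-connect. A minimal sketch — the function names are illustrative, the timeout/back-off logic is omitted, and a `null` payload stands in for a "stop" signal (e.g. on logout):

```
// fetchFn returns a promise for the next notification (the real request
// would hang on the server until one exists).
function longPoll(fetchFn, onNotification) {
  return fetchFn().then(function (notification) {
    if (notification === null) return;        // stop looping
    onNotification(notification);
    return longPoll(fetchFn, onNotification); // re-connect straight away
  });
}

// Usage sketch, with a canned queue standing in for the hanging HTTP call:
var queue = ['new-comment', 'new-folder', null];
var seen = [];
var done = longPoll(function () { return Promise.resolve(queue.shift()); },
                    function (n) { seen.push(n); });
```

A real implementation would also catch errors and re-connect after a short delay, so a dropped connection doesn't kill the loop.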
Taking this incremental approach let us adapt and iterate quickly through the early stages of the product. For all of its functionality, very little logic was happening client-side, so we could get away with hiring only junior backend developers as we grew. Now that the SaaS has a strong user base, a wealth of real-world usage data, a deep understanding of the users' pain points, and can easily acquire funding, we're ready to hire a senior front-end developer to help us rewrite the front-end as a single-page app. The UI is due a major refactor anyway, so we can get two birds stoned at once.
Reflecting
Unless you're Google, you probably don't need to worry about performance until it rears its head. Don't polish a turd, and avoid gold-plating. Instead, choose simple, boring technology to solve the problem you currently face. Keep your code DRY, and remember that every minute spent thinking about design saves an hour of misdirected effort.
A warm embrace to you, fellow wanderer.
Aurora multi-Primary first impression
Why should I use a real multi-Primary setup? To be clear, I don't mean a multi-writer solution where any node can become the active writer in case of need, as with PXC or PS Group Replication. No, we are talking about a multi-Primary setup where I can write at the same time on multiple nodes. I want to insist on this "why?". Having excluded the solutions mentioned above, both of which already cover the famous 99.995% availability (26.30 minutes of downtime in a year), what is left? Disaster recovery? That is something I would love to have, but to be a real DR solution we need to put several kilometers (miles, for imperial) in the middle, and we know (see here and here) that, some misleading advertising aside, we cannot have a tightly coupled cluster solution across geographical regions. So, what is left? I may need more HA; OK, that is a valid reason. Or I may need to scale the number of writes; OK, that is a valid reason as well. This means, in the end, that I am looking at a multi-Primary setup because it should:

Scale writes (more nodes, more writes)
Give me consistent reads (what I write on A must be visible on B)
Give me 0 (zero) downtime, or close to it (5 nines is a maximum downtime of 864 milliseconds per day!!)
Allow me to shift the writer pointer at any time from A to B and vice versa, consistently

Now, keeping myself bound to the MySQL ecosystem, my natural choice would be MySQL NDB Cluster. But my (virtual) boss was at AWS re:Invent and someone mentioned to him that Aurora Multi-Primary does what I was looking for. This (long) article is my voyage of discovery into whether that is true or... not. Given I am focused on behaviour first, and NOT interested in absolute numbers to shock the audience with millions of QPS, I will use low-level Aurora instances, and will perform the tests from two EC2 instances in the same VPC/region as the nodes.
You can find the details about the tests on GitHub here. Finally, I will test:

Connection speed
Stale reads
Writes on a single node, as a baseline
Writes on both nodes: scaling by splitting the load by schema
Writes on both nodes: scaling on the same schema

Test results

Let us start to have some real fun. The first test is...

Connection Speed

The purpose of this test is to evaluate the time taken to open a new connection and the time taken to close it. Opening and closing connections can be very expensive operations, especially if the application does not use a connection pool. As we can see, ProxySQL turns out to be the most efficient way to deal with opening connections, which was expected given the way it is designed to reuse open connections towards the backend. The close-connection operation is different, and there ProxySQL seems to take a little longer. As a global observation, we can say that with ProxySQL we get more consistent behaviour. Of course this is a simplistic test, and we are not checking scalability (from 1 to N connections), but it is good enough to give us an initial feeling. Specific connection tests will be the focus of the next blog on Aurora MM.

Stale Reads

Aurora Multi-Primary uses the same mechanism as default Aurora to update the buffer pool, just in both directions: the buffer pool of Node2 is updated with the modifications performed on Node1, and vice versa. To verify whether an application would really be able to get consistent reads, I ran this test, which measures if, and how many, stale reads we get when writing on one node and reading from the other. Amazon Aurora Multi-Primary has two consistency models. As an interesting fact, with the default consistency model (INSTANCE_RAW) we got 100% stale reads. Given that, I focused on identifying the cost of using the other consistency model (REGIONAL_RAW), which allows an application to have consistent reads.
The results indicate an increase of 44% in total execution time, and of 95% (22 times slower) in write execution. It is interesting to note that the time taken is somewhat predictable and consistent between the two consistency models. The graph below shows in yellow how long the application must wait to see the correct data on the reader node, while in blue is the amount of time the application waits to get back the same consistent read because it must wait for the commit on the writer. As you can see, the two are more or less aligned. Given the performance cost imposed by REGIONAL_RAW, all the other tests were done with the default INSTANCE_RAW, unless explicitly stated.

Writing tests

All tests in this section were run using sysbench-tpcc with the following settings:

```
sysbench ./tpcc.lua --mysql-host=<> --mysql-port=3306 --mysql-user=<> --mysql-password=<> \
  --mysql-db=tpcc --time=300 --threads=32 --report-interval=1 --tables=10 --scale=15 \
  --mysql_table_options="CHARSET=utf8 COLLATE=utf8_bin" --db-driver=mysql prepare

sysbench /opt/tools/sysbench-tpcc/tpcc.lua --mysql-host=$mysqlhost --mysql-port=$port \
  --mysql-user=<> --mysql-password=<> --mysql-db=tpcc --db-driver=mysql --tables=10 \
  --scale=15 --time=$time --rand-type=zipfian --rand-zipfian-exp=0 --report-interval=1 \
  --mysql-ignore-errors=all --histogram --report_csv=yes --stats_format=csv \
  --db-ps-mode=disable --threads=$threads run
```

Write on a single node (baseline)

Before starting the comparative analysis, I wanted to define the "limit" of traffic/load for this platform. From the graph above, we can see that this setup scales up to 128 threads; after that, performance remains more or less steady. Amazon claims that we can roughly double the performance when using both nodes in write mode, writing to a different schema on each to avoid conflicts.
Once more, remember I am not interested in absolute numbers here, but in the behaviour. Given Amazon's claim, our expectation is to see writes roughly double when using both nodes against different schemas.

Write on both nodes, different schemas

AWS recommends this as the scaling solution, and I diligently followed the advice. I used 2 EC2 nodes in the same subnet as the Aurora nodes, writing to different schemas (tpcc & tpcc2).

Overview

Let us make it short and go straight to the point. Did we get the expected scalability? Well, no: we got just a 26% increase, quite far from the expected 100%.

Let us see what happened in detail (if not interested, just skip to the next test).

Node 1

Node 2

As you can see, Node1 was (more or less) keeping up with expectations, staying close to the expected performance, but Node2 was simply not keeping up; performance there was terrible. The graphs below show what happened: while Node1 was (again, more or less) scaling up to the baseline expectation (128 threads), Node2 collapsed to its knees at 16 threads and was never able to scale up.

Reads

Node 1

Node1 scales reads as expected, even if here and there we can see some performance deterioration.

Node 2

Node2 does not scale reads at all.

Writes

Node 1

Same as reads.

Node 2

Same as reads.

Now, someone may think I made a mistake and was writing to the same schema. I assure you I was not. Check the next test to see what happened when using the same schema.

Write on both nodes, same schema

Overview

Now, now, Marco, this is unfair. You know this will cause contention. Yes, I do! But nonetheless I was curious to see what would happen and how the platform would deal with that level of contention. My expectation was a lot of performance degradation and an increased number of locks.
About conflicts I was not wrong; after the test, node2 reported:

```
+-------------+---------+-------------------------+
| table       | index   | PHYSICAL_CONFLICTS_HIST |
+-------------+---------+-------------------------+
| district9   | PRIMARY |                    3450 |
| district6   | PRIMARY |                    3361 |
| district2   | PRIMARY |                    3356 |
| district8   | PRIMARY |                    3271 |
| district4   | PRIMARY |                    3237 |
| district10  | PRIMARY |                    3237 |
| district7   | PRIMARY |                    3237 |
| district3   | PRIMARY |                    3217 |
| district5   | PRIMARY |                    3156 |
| district1   | PRIMARY |                    3072 |
| warehouse2  | PRIMARY |                    1867 |
| warehouse10 | PRIMARY |                    1850 |
| warehouse6  | PRIMARY |                    1808 |
| warehouse5  | PRIMARY |                    1781 |
| warehouse3  | PRIMARY |                    1773 |
| warehouse9  | PRIMARY |                    1769 |
| warehouse4  | PRIMARY |                    1745 |
| warehouse7  | PRIMARY |                    1736 |
| warehouse1  | PRIMARY |                    1735 |
| warehouse8  | PRIMARY |                    1635 |
+-------------+---------+-------------------------+
```

This is obviously a strong indication that something was not working right. In terms of performance gain, if we compare ONLY the results at 128 threads: even with the high level of conflict, we still get a 12% performance gain. The problem is that, in general, the two nodes behave quite badly. If you check the graph below, you can see that the level of conflict is such that it prevents the nodes not only from scaling, but from behaving consistently at all.

Node 1

Node 2

Reads

In the following graphs we can see how Node1 had issues; it actually crashed 3 times, during the tests with 32/64/512 threads. Node2 was always up, but its performance was very low.

Node 1

Node 2

Writes

Node 1

Node 2

Recovery from a crashed node

About recovery time: reading the AWS documentation and listening to presentations, I often heard that Aurora Multi-Primary is a zero-downtime solution, with statements like: "in applications where you can't afford even brief downtime for database write operations, a multi-master cluster can help to avoid an outage when a writer instance becomes unavailable.
The multi-master cluster doesn't use the failover mechanism, because it doesn't need to promote another DB instance to have read/write capability."

To achieve this, the suggestion I found was to have applications point directly at the node endpoints, not at the cluster endpoint. In this context, the solution pointing at the nodes should be able to fail over within a second or so, while the cluster endpoint takes longer.

Personally, I think that designing an architecture where the application is responsible for the database connection and for failover is a kind of throwback to 2001. But if you feel this is the way, well, go for it. What I did for testing was use ProxySQL, as plain as possible, with nothing more than its basic native monitoring, and then compare the results with tests using the cluster endpoint. In this way I followed the advice of pointing directly at the nodes, but did it in a modern way.

The results are below, and they confirm (more or less) the data coming from Amazon. A downtime of 7 seconds is quite a long time nowadays, especially if I am targeting the 5-nines solution, which, remember, allows 864 ms of downtime per day. Using ProxySQL gets closer to that, but it is still too long to be called 0 (zero) downtime. I also had fail-back issues when using the AWS cluster endpoint, given it was not able to move the connections to the joining node seamlessly. Last but not least, when using the consistency level INSTANCE_RAW I had data issues as well as PK conflicts:

```
FATAL: mysql_drv_query() returned error 1062 (Duplicate entry '18828082' for key 'PRIMARY')
```

Conclusions

As stated at the beginning of this long blog, my expectations of a multi-Primary solution were to:

Scale writes (more nodes, more writes)
Give me 0 (zero) downtime, or close to it (5 nines is a maximum downtime of 864 milliseconds per day!!)
Allow me to shift the writer pointer at any time from A to B and vice versa, consistently
Honestly, I feel we have completely failed on the scaling point. If I used the largest Aurora instances I would probably get much better absolute numbers, and it would take longer to hit the same issues, but I would hit them. In any case, if the multi-Primary solution is designed to provide that scalability, it should do so at any instance size. I did not get zero downtime, but I was able to fail over pretty quickly with ProxySQL. Finally, unless the consistency model is REGIONAL_RAW, shifting from one node to the other is prone to possible negative effects like stale reads; because of that, I consider this requirement not satisfied in full.

Given all the above, I think this solution could eventually be valid only for high availability (close to 5 nines), but given it comes with some limitations, I do not feel comfortable preferring it over the others just for HA; in the end, default Aurora is already good enough as a highly available solution.

References

https://www.youtube.com/watch?v=p0C0jakzYuc
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-multi-master.html
https://www.slideshare.net/marcotusa/improving-enterprises-ha-and-disaster-recovery-solutions-reviewed
https://www.slideshare.net/marcotusa/robust-ha-solutions-with-proxysql
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-multi-master.html#aurora-multi-master-limitations
http://www.tusacentral.com/joomla/index.php/mysql-blogs/225-aurora-multi-primary-first-impression
Economies of Scale - Preview
Jac felt the rain on her hat like a thousand drumming fingers. Harsh white neons cast the towering city blocks in monochrome and illuminated nothing. She recognized nothing of the street; no helpful annotations or navigational overlays flickered across the bare concrete and slick black road. No signs distinguished one window from another, no graffiti marked territory, no music droned from clubs or dens. It dawned on her that there weren’t even any alleys.
Her service weapon felt too heavy to carry. Her boots filled with rainwater.
The city would be washed away. Something looming up behind her, something with too many eyes, would make it so.
“Good morning, Detective Hobbes.”
She woke drenched in sweat, eyes wide and staring at the blank ceiling of her flat. Her morning feeds crawled from the edge of vision, lambent text on the off-white plaster. She blinked, banishing it, sitting up and reaching for water on her bedside table.
“The time is eight AM exactly,” the Municipal Information System said inside her head. “The rain is scheduled to continue for one more hour, and will resume at 2 PM.”
Civic Centre calls it Sally; all in the push for a ‘friendlier Califresco.’
Jac toyed with disabling the wake-up as she hauled herself out of bed and into the shower. On the other hand, she didn’t want another snide little reprimand about ‘governmental unity’ and ‘encouraging metrics.’ She wondered if uptower folks got the same pressure, or if they even noticed, so accustomed to a steady stream of ads they must be.
She switched to audio feed while getting dressed.
“...brief public disturbance in Such’s Square ably contained by Galathi Inc.’s new deterrent drones…”
Jac cast the coroner’s intake list on the back wall, banished it again when the schedule section filled up with the word ‘incineration’. Ably contained, she frowned, buckling on her holster.
“...manual drive required between Grand and Fifth as installation of new guidance lines continues. Commuters should expect some delays and be prepared to work from your vehicles...”
“...solute disgrace, Jordan, an absolute disgrace, it really is tampering with the Lord’s design when the mind is so perfect and sacred…”
“SO HUNGRY YOU COULD JUST EAT SOMEONE?! COME ON OVER TO THE PULLED PIG; GENUINE CLONED HUMAN MEAT…”
“...expecting the Keymaker probe to arrive on Yrva next month, but scientists are still unsure if mana obeys the same laws on other planets…”
Jac killed the feed, switched to Dispatch, and stepped out of her flat into the bare beige hallway. The identical doors stretching along the walls always inspired a feeling akin to vertigo. Lewisham insisted she needed her entoptics recalibrated. The door locked behind her; the lift opened ahead of her and automatically selected parking. Her car was already running when she reached it, driver’s door popping gently ajar for her convenience.
Her father had hated that, but it’d saved Jac’s life at least twice.
On the road, she enabled automatic and fished for a cigarette. The sleek blue-black vehicle smoothly joined the Civic lane and carried her across town. The city blocks towered up to the thick, black clouds and the rain came down like bullets from heaven. The roads were narrow valleys amid concrete cliffs.
Citizens and blacklines crowded the pavements and underpasses. Drones for advertising and drones for security drifted overhead like glowing, bloated ticks. The rain was like the surface of an oil-slick, a riot of colour from the neons on every shopfront, the holoprojections and spotlights.
The smoke curling from Jac’s nostrils was whisked away by discreet vents in the ceiling. She half-watched it, devoting more attention to the words sliding down her windshield. Robbery in Vinter; owners aren’t insured for an investigation. Assault at Wilmund and Cross; victim paid on the spot for full prosecution. A raft of illegal weapon discharges which meant a turf war had turned into a massacre. Jac made a note to look into that later - someone on the force got paid for that. She may not be able to catch them for it, but at least she would know.
The Precinct’s shutters rolled up to admit her car. Jac climbed out and watched it get added to the stack below, stubbing out her smoke and tossing it on her way through the inner door.
Desk Sergeant Mahoney, rail thin with hunched shoulders and a quietly mean streak, barely glanced up as she passed. An assistant civic prosecutor hustled paperwork to the Captain's office, and a few uniforms chatted with coffee cups steaming in their hands.
“Hey, Hobbes.”
Jac paused at the door to The Pen as a skinny guy, a lurid blue-glass gem where a left eye should have been, strode toward her.
“Mornin’ sir,” she replied, and let the door close, standing back to wait for him.
“Don’t fuck about, Hobbes,” he snapped, as if the rod up his ass spoke for him, “don’t pretend you can’t feel the heat. This is a very simple question - were you at Cuveil Street last night?”
Hobbes folded her arms and pursed her lips, pulling up the station alerts. A riot, according to official sources, but she knew better.
“No, sir.”
“I know you disable your tracker when you’re off-duty, Hobbes. Don’t repeat history.”
Tensing, she held herself still and passed him her cached location data; never left her flat.
“Good, then it’s your turn,” said DCI Slater, turning on his heel.
“My turn?” Hobbes half-shouted after him, “you had to tell me that personally?” she added, but he was already boarding the lift at the end of the hall.
Jac swore under her breath and entered The Pen, where a dozen good detectives were slowly turning into comfortable desk jockeys who would later turn into the kind of pricks who make superintendent. She mumbled hellos on her way through the dull beige cubicles to her desk. By the time she sat down, her neural spike had wirelessly booted her terminal and logged in, at which point the actual device became largely irrelevant. Jac felt better about a physical keyboard for reports than letting the software in her head sneakily edit her entries for better optics.
Immediately all her casework was pushed offscreen and out of her feed by a pulsing orange box marked PRIORITY ASSIGNMENT.
Gang violence at Cuveil Street just after the nightly storm started. At least three dead according to the census data, and that was already entered into the report. Jac smiled, brittle and mirthless; the actual bodycount was likely higher. It would take an hour to resolve, and a lifetime to forget.
She rolled her chair into the aisle and called over the nearest cube wall. “Campbell, are you busy?”
Campbell looked up with the expression of someone relieved not to have been caught browsing skin at the office.
“Will I regret saying no?” They replied, bioplastic eyebrows cycling from businesslike black to playful silver. Campbell claimed that they were a necessary tool for communication, but Jac always suspected that was one implant the willowy detective chose out of vanity.
“Strike broken on Cuveil last night,” she replied, lips curling bitterly, “I’m the rubberstamp and I’d like some company.”
“This is a favour, then…”
“If returning it involves the phrase ‘plus one’, save it for lunch,” Jac said, rolling her eyes and enabling her remote link as she stood to put her coat back on.
“Deal. I’m driving,” Campbell said, and followed her to the carpool.
The problem with mass-produced unmarked cars is that, sooner or later, people know what a silver-gray sedan is doing in the neighbourhood. Jac busied herself with official census data for the blocks around Cuveil while Campbell hummed along to their tailored playlist. Distinctly lower class area, right near the slums, with a registered civilian population in the low hundreds. Which meant at least twice that in blacklines, the unregistered criminal class; no rights, no records, no civic services. She ran a quick scan of the citizens in the area, rifling through their lives like a dispassionate god; an affair in that apartment, chronic anxiety there, the thousand small moments those people liked to believe were private. One office clerk’s behavioural record was flagged with moderate-high suicide likelihood, but the police software precluded any intervention: she wasn’t insured for emergency services to care.
The rest of the Redlines were safe and well-behaved, holed up in their apartments or away at work to avoid the inevitable police investigation at the intersection.
“This was WavTec again, right?” Campbell asked aloud, though they both knew the answer.
“Yeah,” Jac replied, “I mean, probably. They’ve got a history of employee mistreatment and they’re right around the corner.”
“Ah, ah, ah,” Campbell wagged a finger, letting the autopilot compensate for inattention, “You mean ‘minor persistent morale deficit.’”
Jac snorted, “Employee Gratitude Shortfall.”
“That was last year’s rhetoric.”
“Hm,” Jac grunted, staring out the window, “flagged as a gang skirmish, so I guess someone had a legitimate grievance this time around.”
“Or enough money for a lawyer.”
“You have any informants locally?”
“Ha,” Campbell exclaimed, “you actually want to treat this like a real investigation?”
“I want,” Jac smirked, looking over at them, “to make Slater work overtime.”
Campbell shook their head, “you’re gonna find contraband in your desk one of these days.”
“Yeah, yeah, runs in the family,” Jac replied, looking away with a dismissive wave of the hand.
“Sorry,” Campbell said, and went back to manual drive.
“The dead are buried, Campbell, I wouldn’t get bunched up about it.”
The car rolled smoothly to a halt a few inches from the police tape, beyond which a pair of uniformed officers slouched against the wall of an apartment block with half-eaten burgers in hand. A pair of vans waited on the opposite side of the intersection; competing sanitation companies bidding for the cleanup. The two detectives climbed out of the vehicle, Campbell pressing their lips into a thin line as they approached the uniforms.
“Really, guys?”
The taller of the two took a bite of his burger, the shorter shrugged; “Already been ‘round with the scent control stuff, so if you just don’t look you can keep your lunch down.”
“This is lunch?” Campbell went on, “it’s not even midday.”
“Oooh,” the shorter cop turned to her partner, “you hear that, Dave? Detective I-Get-A-Regular-Shift thinks we’re slacking off.”
“Must be nice to start work after sunrise,” the taller agreed, talking around a mouthful of artificial beef and Pretty Bready™ bun.
“You wanna talk to our manager, or is that just your haircut?” the short cop continued.
Jac ignored them and crouched under the tape, walking a slow circuit of the intersection. The smell was gone, but the bodies were pale and rigid, faces mostly frozen in screams. The rain had washed most of the blood away, left the corpses soaking, their open mouths full of water.
She made notes as she went, adding them to her internal memory.
Three dead citizens, Redlines still broadcasting flat vitals.
Thirty other bodies, strewn across the wide intersection, dressed in a mix of cheap office attire and outsider fashions.
She crouched beside the body of a boy who couldn’t have been much past eighteen, a neat hole from a beamcaster in his temple and dyed hair wilting out of a styled fan. A fresh tattoo resembling a Seizers Legion tag stood out on his arm; Jac pulled on a glove and ran her thumb over it. It smeared a little, and would have bled if he’d still been alive. Applied post mortem, like the shiny new gun clenched in his hand.
Knowing what to expect, she checked it for a chip and found none, nor any ammunition. Anything but the manufacturer would be scrubbed from the body of the weapon.
Jac stood and put her hands on her hips, knitting her brows as she looked around at the corpses again. A few more identical weapons wrapped in bloodless fingers, corpses lying atop each other like they tripped over the fallen before joining them. Her frown deepened and she took a few steps forward, stepped over a pair of bodies, two steps more, pausing at a stack of three corpses.
Campbell strolled over to join her, looking queasy.
“Ready to stamp this and go?”
Jac shook her head, “not just yet - can you pull any drone recordings?”
“Sure, gizza minute or two,” Campbell said, tilting their head, “you hoping to find something specific?”
“Maybe,” she replied, squinting up to where the towering blocks disappeared into the tamed clouds.

Whether you loved or hated this, feel free to comment. This will proceed based on demand - check out the whichonenext tag to see the other contender for my next webserial.
#whichonenext#fantasy#Crucible: Dismal Streets#sci-fantasy#cybernetics#magitech#fiction#detective story#neonoir
Toe Poke Daily: Messi accepts Ronaldo’s dinner invitation
Cristiano Ronaldo notched his 10th 4-goal game for both club and country in Portugal’s 5-1 win over Lithuania.
The Serie Awesome podcast crew provide their grades on Juve’s summer dealings, where questions arise over keeping “dead weight” players.
As Barcelona’s stuttering start continued with a 2-2 draw at Osasuna, Steve Nicol is still baffled they’re struggling even without Lionel Messi.
The Toe Poke Daily is here every day to bring you all the weirdest stories, quirkiest viral content and top trolling that the internet has to offer, all in one place.
Jump to: Barcelona’s new kit inspired by Ronaldo — no, not that one | Kevin-Prince Boateng admits to buying three cars in one day
Lionel Messi and Cristiano Ronaldo could be sharing a table for two soon.
Barcelona star Lionel Messi has accepted an invitation to dinner from his great rival, Juventus forward Cristiano Ronaldo.
While the pair were sat next to each other in the front row at last month’s UEFA Champions League draw in Monaco, Ronaldo revealed that, despite sharing the stage at many gala events for over a decade, they had never broken bread together.
“We have a good relationship, we have not had a dinner together yet, but I hope in the future,” Ronaldo said, to much applause from the star-studded audience.
Well, it could happen. In a rare interview with Sport this week, Messi was pressed on his relationship with Ronaldo after his eternal rival extended an olive branch across the divide.
“We’re not friends because we have never shared the same dressing room, but we always meet up at the galas, we speak and there is absolutely no problem between us,” the Argentina captain said. “That last one [the UEFA Champions League draw gala] was the one where we spoke the most because we spent the most time together.
“I don’t know if we will eat together because both of us have busy lives and I’m not sure if we will be able to make them coincide, but I have no problem accepting his invite to dinner.”
Nice try, Leo, but you’re not wriggling out of this social commitment that easily.
– Marcotti: Why do Messi and Ronaldo fans always fight?
It makes you wonder what would be on the menu when they finally make it to their table for two — La Pulga wheat? “Siiiiuu!!” bass? Jamaican curry GOAT?
Messi was also asked by Sport if he ever gets tired of being, well, Lionel Messi. The five-time Ballon d’Or winner refuses to grumble about any aspect of his life, even when he’s being routinely approached by fans while not exactly looking his best on the early morning school run.
“No, I don’t get tired. Thankfully, I experience many strange and impressive things and that is very nice,” came the reply. “It’s true that I would like to be unnoticed. Especially when I’m with my children at school or on the streets.
“At times I take them to school at 8:30 am and I’m asked [by fans] if they can have an autograph or take a picture with me and I’m looking very sleepy. But anyway, I cannot complain about anything.”
– ESPN fantasy soccer: Sign up now! – Luck Index: Could City have won title by more? – ESPN Ultimate XI: Our dream team would win it all!
Barcelona’s new kit inspired by Ronaldo — no, not that one
Barcelona have turned to one of their former greats as inspiration for their new third kit.
– All the new 2019-20 kits for Europe’s top clubs
While his time at the Camp Nou was brief, Ronaldo scored 47 goals in 49 appearances for Barca during the 1997-98 season to secure himself a place among the club’s myriad legends.
As a nod to O Fenomeno, the Catalans have brought back the iconic teal strip that became synonymous with the World Cup-winning Brazil forward.
– Can you guess XI on Kompany’s City debut? – ‘Disrespectful!’: Stars angry at FIFA 20 ratings – Maradona’s unveiling at new club got pretty wild
Kevin-Prince Boateng admits to buying three cars in one day

Kevin-Prince Boateng — you might see him in a Lambo.
Kevin-Prince Boateng has revealed he once bought three cars in one day during his largely ill-fated stint at Tottenham early in his career. The Ghana international endured two turbulent seasons at White Hart Lane which culminated in him being sold on to Portsmouth at a loss in 2009.
Looking back, Boateng admits that his behaviour was “idiotic” but that it was a result of him being a lonely young footballer with large amounts of disposable income and lots of free time to spend it as he pleased.
“I was an idiot. I didn’t treat football as a job,” the 32-year-old told La Repubblica. “I had talent, but I trained the bare minimum, an hour on the field. I was the last to arrive and the first to leave. I’d be out with friends.
“I had money, I lived like a king. I’d never been to the gym. That changes your later career. I bought three cars in one day when I was at Tottenham: a Lamborghini, a Hummer and a Cadillac.”
Boateng — who counts Barcelona, AC Milan and Borussia Dortmund among his 11 clubs — offered a few choice words of warning for young players today who find themselves in danger of drifting off course.
The midfielder, now at Fiorentina, said: “To the youngsters, I tell them: ‘You cannot buy happiness.’ I didn’t play, I had family problems, I was out of the squad.
“I was looking for happiness in material things: a car makes you happy for a week. I bought three to be happy for three weeks.”
0 notes
Photo

From crawl completion notifications to automated reporting: this post may not have Billy the Kid or Butch Cassidy; instead, here are a few of my most useful tools to combine with the SEO Spider (just as exciting).

We SEOs are extremely lucky – not just because we’re working in such an engaging and collaborative industry, but because we have access to a plethora of online resources, conferences and SEO-based tools to lend a hand with almost any task you could think up. My favourite of which is, of course, the SEO Spider – after all, following Minesweeper Outlook, it’s likely the most used program on my work PC. However, a great program can only be made more useful when combined with a gang of other fantastic tools to enhance, complement or adapt the already vast and growing feature set.

While it isn’t quite the ragtag group from John Sturges’ 1960 cult classic, I’ve compiled the Magnificent Seven(ish) SEO tools I find useful to use in conjunction with the SEO Spider:

Debugging in Chrome Developer Tools

Chrome is the definitive king of browsers, and arguably one of the most installed programs on the planet. What’s more, it’s got a full suite of free developer tools built straight in – to load it up, just right-click on any page and hit Inspect.

Among many aspects, this is particularly handy to confirm or debunk what might be happening in your crawl versus what you see in a browser. For instance, while the Spider does check response headers during a crawl, maybe you just want to dig a bit deeper and view it as a whole? Well, just go to the Network tab, select a request and open the Headers sub-tab for all the juicy details:

Perhaps you’ve loaded a crawl that’s only returning one or two results and you think JavaScript might be the issue?
Well, just hit the three dots (highlighted above) in the top right corner, then click Settings > Debugger > Disable JavaScript, and refresh your page to see how it looks.

Or maybe you just want to compare your nice browser-rendered HTML to that served back to the Spider? Just open the Spider and enable ‘JavaScript Rendering’ & ‘Store Rendered HTML’ in the configuration options (Configuration > Spider > Rendering/Advanced), then run your crawl. Once complete, you can view the rendered HTML in the bottom ‘View Source’ tab and compare it with the rendered HTML in the ‘Elements’ tab of Chrome.

There are honestly far too many options in the Chrome developer toolset to list here, but it’s certainly worth getting your head around.

Page Validation with a Right-Click

Okay, I’m cheating a bit here as this isn’t one tool, rather a collection of several, but have you ever tried right-clicking a URL within the Spider? Well, if not, I’d recommend giving it a go – on top of some handy exports like the crawl path report and visualisations, there’s a ton of options to open that URL in several individual analysis & validation apps:

Google Cache – See how Google is caching and storing your pages’ HTML.
Wayback Machine – Compare URL changes over time.
Other Domains on IP – See all domains registered to that IP address.
Open Robots.txt – Look at a site’s robots.txt.
HTML Validation with W3C – Double-check all HTML is valid.
PageSpeed Insights – Any areas to improve site speed?
Structured Data Tester – Check all on-page structured data.
Mobile-Friendly Tester – Are your pages mobile-friendly?
Rich Results Tester – Is the page eligible for rich results?
AMP Validator – Official AMP Project validation test.

User Data and Link Metrics via API Access

We SEOs can’t get enough data, it’s genuinely all we crave – whether that’s from user testing, keyword tracking or session information, we want it all and we want it now!
After all, creating the perfect website for bots is one thing, but ultimately the aim of almost every site is to get more users to view and convert on the domain, so we need to view it from as many angles as possible.

Starting with users, there’s practically no better insight into user behaviour than the raw data provided by both Google Search Console (GSC) and Google Analytics (GA), both of which help us make informed, data-driven decisions and recommendations. What’s great about this is that you can easily integrate any GA or GSC data straight into your crawl via the API Access menu, so it’s front and centre when reviewing any changes to your pages. Just head over to Configuration > API Access > [your service of choice], connect to your account, configure your settings and you’re good to go.

Another crucial area in SERP rankings is the perceived authority of each page in the eyes of search engines – a major aspect of which is, of course, links, links and more links. Any SEO will know you can’t spend more than five minutes at BrightonSEO before someone brings up the subject of links; it’s like the lifeblood of our industry. Whether their importance is dying out or not, there’s no denying that they currently still hold much value within our perceptions of Google’s algorithm.

Well, alongside the previous user data, you can also use the API Access menu to connect with some of the biggest tools in the industry, such as Moz, Ahrefs or Majestic, to analyse your backlink profile for every URL pulled in a crawl. For all the gory details on API Access, check out the following page (scroll down for other connections): https://www.screamingfrog.co.uk/seo-spider/user-guide/configuration/

Understanding Bot Behaviour with the Log File Analyzer

An often-overlooked exercise: nothing gives us quite the insight into how bots are interacting with a site like data taken directly from the server logs.
The trouble is, these files can be messy and hard to analyse on their own, which is where our very own Log File Analyzer (LFA) comes into play (they didn’t force me to add this one in, promise!).

I’ll leave @ScreamingFrog to go into all the gritty details on why this tool is so useful, but my personal favourite aspect is the ‘Import URL data’ tab on the far right. This little gem will effectively match any spreadsheet containing URL information with the bot data on those URLs. So, you can run a crawl in the Spider while connected to GA, GSC and a backlink app of your choice, pulling the respective data from each URL alongside the original crawl information. Then, export this into a spreadsheet before importing into the LFA to get a report combining metadata, session data, backlink data and bot data in one comprehensive summary – aka the holy quadrilogy of technical SEO statistics.

While the LFA is a paid tool, there’s a free version if you want to give it a go.

Crawl Reporting in Google Data Studio

One of my favourite reports from the Spider is the simple but useful ‘Crawl Overview’ export (Reports > Crawl Overview), and if you mix this with the scheduling feature, you’re able to create a simple crawl report every day, week, month or year. This allows you to monitor the domain for any drastic changes, alerting you to anything which might be cause for concern between crawls.

However, in its native form it’s not the easiest to compare between dates, which is where Google Sheets & Data Studio can come in to lend a hand. After a bit of setup, you can easily copy the crawl overview into your master G-Sheet each time your scheduled crawl completes, and Data Studio will automatically update, letting you spend more time analysing changes and less time searching for them.

This will require some fiddling to set up; however, at the end of this section I’ve included links to an example G-Sheet and Data Studio report that you’re welcome to copy.
Essentially, you need a G-Sheet with date entries in one column and unique headings from the crawl overview report (or another) in the remaining columns:

Once that’s sorted, take your crawl overview report and copy out all the data in the ‘Number of URI’ column (column B), being sure to copy from ‘Total URI Encountered’ until the end of the column. Open your master G-Sheet and create a new date entry in column A (add this in a format of YYYYMMDD). Then in the adjacent cell, right-click > ‘Paste special’ > ‘Paste transposed’ (Data Studio prefers this to long-form data):

If done correctly with several entries of data, you should have something like this:

Once the data is in a G-Sheet, uploading it to Data Studio is simple: just create a new report > Add data source > connect to G-Sheets > [your master sheet] > [sheet page], and make sure all the heading entries are set as a metric (blue) while the date is set as a dimension (green), like this:

You can then build out a report to display your crawl data in whatever format you like. This can include scorecards and tables for individual time periods, or trend graphs to compare crawl stats over the date range provided (your very own Search Console Coverage report).

Here’s an overview report I quickly put together as an example. You can obviously do something much more comprehensive than this should you wish, or perhaps take this concept and combine it with even more reports and exports from the Spider.
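If you’d rather script the manual ‘Paste transposed’ step above, the same reshaping is a one-liner in Python. This is only a sketch: the headings are an illustrative subset of the crawl overview export, and the metric values are made up.

```python
# Turn the single "Number of URI" column from a crawl overview export into
# one wide, dated row ready for the master G-Sheet (values are made up).
from datetime import date

headings = ["Total URI Encountered", "Total URI Crawled",
            "Internal URI", "External URI"]      # illustrative subset
overview_column = [54321, 50000, 46000, 4000]    # column B of the export

# Date first (YYYYMMDD, as Data Studio expects), then the transposed metrics.
wide_row = [date(2019, 1, 16).strftime("%Y%m%d")] + overview_column
print(dict(zip(["Date"] + headings, wide_row)))
```

From there, appending `wide_row` to the master sheet (by hand or via the Sheets API) gives Data Studio exactly the shape it wants.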
If you’d like a copy of both my G-Sheet and Data Studio report, feel free to take them from here:

Master Crawl Overview G-Sheet: https://docs.google.com/spreadsheets/d/1FnfN8VxlWrCYuo2gcSj0qJoOSbIfj7bT9ZJgr2pQcs4/edit?usp=sharing

Crawl Overview Data Studio Report: https://datastudio.google.com/open/1Luv7dBnkqyRj11vLEb9lwI8LfAd0b9Bm

Note: if you take a copy, some of the dimension formats may change within Data Studio (breaking the graphs), so it’s worth checking the date dimension is still set to ‘Date (YYYYMMDD)’.

Building Functions & Strings with XPath Helper & Regex Search

The Spider is capable of doing some very cool stuff with the extraction feature, a lot of which is listed in our guide to web scraping and extraction. The trouble with much of this is that it requires you to build your own XPath or regex string to lift your intended information.

While simply right-clicking > Copy XPath within the inspect window will usually do enough to scrape, it’s not always going to cut it for some types of data. This is where two Chrome extensions, XPath Helper & Regex Search, come in useful. Unfortunately, these won’t automatically build any strings or functions, but if you combine them with a cheat sheet and some trial and error, you can easily build one out in Chrome before copying it into the Spider to run in bulk across all your pages.

For example, say I wanted to get all the dates and author information of every article on our blog subfolder (https://www.screamingfrog.co.uk/blog/). If you simply right-clicked on one of the highlighted elements in the inspect window and hit Copy > Copy XPath, you would be given something like:

/html/body/div[4]/div/div[1]/div/div[1]/div/div[1]/p

While this does the trick, it will only pull the single instance copied (‘16 January, 2019 by Ben Fuller’). Instead, we want all the dates and authors from the /blog subfolder.
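As an aside, candidate XPath expressions can also be sanity-checked locally before committing them to a full crawl. Here’s a small sketch using Python’s standard-library ElementTree, which supports a useful subset of XPath; the markup below is a made-up stand-in for the blog’s structure, not its real HTML.

```python
# Sanity-check an XPath expression against a tiny stand-in for the blog markup.
import xml.etree.ElementTree as ET

html = (
    '<div>'
    '<div class="main-blog--posts_single-inner--text--inner clearfix">'
    '<h2>Some post title</h2>'
    '<p>16 January, 2019 by Ben Fuller</p>'
    '</div>'
    '</div>'
)

root = ET.fromstring(html)
# ElementTree supports attribute predicates, so this mirrors the browser XPath.
xpath = './/div[@class="main-blog--posts_single-inner--text--inner clearfix"]/p'
results = [p.text for p in root.findall(xpath)]
print(results)  # the date/author line only, not the <h2> title
```

If the expression pulls the wrong nodes here, it will pull the wrong nodes in the Spider too, so this is a cheap way to iterate.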
By looking at which elements the reference is sitting in, we can slowly build out an XPath function directly in XPath Helper and see what it highlights in Chrome. For instance, we can see it sits in a class of ‘main-blog--posts_single-inner--text--inner clearfix’, so pop that as a function into XPath Helper:

//div[@class="main-blog--posts_single-inner--text--inner clearfix"]

XPath Helper will then highlight the matching results in Chrome:

Close, but this is also pulling the post titles, so not quite what we’re after. It looks like the date and author names are sitting in a sub <p> tag, so let’s add that into our function:

(//div[@class="main-blog--posts_single-inner--text--inner clearfix"])/p

Bingo! Stick that in the custom extraction feature of the Spider (Configuration > Custom > Extraction), upload your list of pages, and watch the results pour in!

Regex Search works much in the same way: simply start writing your string, hit next, and you can visually see what it’s matching as you go. Once you’ve got it, whack it in the Spider, upload your URLs, then sit back and relax.

Notifications & Auto Mailing Exports with Zapier

Zapier brings together all kinds of web apps, letting them communicate and work with one another when they might not otherwise be able to. It works by having an action in one app set as a trigger, and another app set to perform an action as a result. To make things even better, it works natively with a ton of applications such as G Suite, Dropbox, Slack and Trello.

Unfortunately, as the Spider is a desktop app, we can’t directly connect it with Zapier. However, with a bit of tinkering, we can still make use of its functionality to provide email notifications or auto-mail reports/exports to yourself and a list of predetermined contacts whenever a scheduled crawl completes.

All you need is to have your machine or server set up with an auto cloud-sync directory such as those for Dropbox, OneDrive or Google Backup & Sync. Inside this directory, create a folder to save all your crawl exports & reports. In this instance I’m using G-Drive, but others should work just as well.

You’ll need to set a scheduled crawl in the Spider (File > Schedule) to export any tabs, bulk exports or reports into a timestamped folder within this auto-synced directory:

Log into or create an account for Zapier and make a new ‘zap’ to email yourself or a list of contacts whenever a new folder is generated within the synced directory you selected in the previous step.
You’ll have to provide Zapier access to both your G-Drive & Gmail for this to work (do so at your own risk). My zap looks something like this:

The above zap will trigger when a new folder is added to /Scheduled Crawls/ in my G-Drive account. It will then send out an email from my Gmail to myself and any other contacts, notifying them and attaching a direct link to the newly added folder and Spider exports.

I’d like to note here that if you’re running a large crawl or directly saving the crawl file to G-Drive, you’ll need enough storage to upload (so I’d stick to exports). You’ll also have to wait until the sync from your desktop to the cloud has completed before the zap will trigger, and it checks for this action on a cycle of 15 minutes, so it might not be instantaneous.

Alternatively, do the same thing on IFTTT (If This Then That), but set it so a new G-Drive file will ping your phone, turn your smart light a hue of lime green, or just play this sound at full volume on your smart speaker. We really are living in the future now!

Conclusion

There you have it: the Magnificent Seven(ish) tools to try using with the SEO Spider, combined to form the deadliest gang in the west web. Hopefully you find some of these useful, but I’d love to hear if you have any other suggestions to add to the list.

The post SEO Spider Companion Tools, Aka ‘The Magnificent Seven’ appeared first on Screaming Frog.
0 notes
Text
Do filter bubbles only exist in our online lives?
Source: https://www.bbc.co.uk/
I’ll be the first one to say it - I don’t often try new things. I have a close group of friends, most of whom are the same age as me, with similar interests, backgrounds and beliefs. I often watch the same kind of TV and films. I read the news, but restrict myself to content from the platforms where my parents got their news when I was growing up: my mum buys the Guardian; my dad, being a proud Liverpudlian, will read anything but The Sun. I live in Reading, in a house of six girls my age who are very similar to me.
When I go home from university, I go out with my friends and we always bump into people who we have only seen a handful of times since school or college, and although this feels like a change, these people are in fact very similar to me. They all received a similar education, and they grew up in the same city at the same time as me. Many of our parents went to university together, and others have known each other since school.
We do have the option to go out to a different area of the city where we are likely to meet people who we wouldn’t normally socialise with: we could go to the high street, where we would likely meet people from out of town; or I could go across town where the more upmarket pubs, clubs and restaurants are buzzing with professionals and business people who are very different to us. However, every weekend, we choose to go to the local pubs in town where I know we will end up spending the night with the same network of people as every other weekend. They are all a similar age to me, and they all went to one of my local schools or colleges at a similar time to my sisters and me. Even when I go on holiday, I go with my friends to places where we are surrounded by people similar to us. We stay in places which look like where we live at home. We eat the same foods as we do at home, drink the same drinks, go to the same kinds of places.
It’s no surprise, then, that media networks such as Facebook, Instagram and Google have created algorithms to appeal to the way in which many of us choose to experience life offline. Of course it’s important for these platforms to be selective to a certain extent for their ads to be effective, otherwise their users would be overwhelmed by an information overload. Instagram shows us content depending on what material we’ve previously interacted with: on the explore page, I’m shown pictures of nail art, holiday destinations and dogs - content which I expect is pretty different to what my 14-year-old stepbrother is shown. While I have no complaints about these personalised algorithms other than the slight discomfort at the thought of my online activity being constantly monitored, I can’t help but wonder who benefits from online filter bubbles?
The EU’s recent GDPR regulations require that platforms must explain to users their use of cookies, and give them an option to opt-out.
Source: Instagram.
The targeted advertising opportunities which have been created by the use of online cookies which monitor our online behaviour and preferences are very beneficial to companies, artists and not-for-profit campaigns alike. I was speaking to my friend the other day who makes music on the side of his university degree. A couple of weeks ago, he paid Instagram to sponsor his post to people who are local and likely to enjoy listening to his type of music. I didn’t see the post on my feed, but my flatmate did - then a week ago, he received an email from BBC Radio Berkshire, to tell him that they were going to be playing his track. He’s an example of just one of many startups who are benefiting from the ready-made market which has become accessible through manipulation of filter bubbles. This makes me think that despite people’s objections, filter bubbles can be a positive thing. Advertising is, has been and will always be all around us, and I know that I would rather the platform be given to unestablished, independent and relevant artists and businesses rather than being bombarded with the information overload of adverts which are useless to me.
A lot of people worry about personalised algorithms online generating content which lacks diversity and variety, while others worry that online personalisation is dangerous because some users don’t even know about the unavailability of options on their personalised browsers. Many activists against these algorithms, like Eli Pariser (2011), argue that we should defy the bubble by deleting cookies and internet history and using browser settings where our behaviour isn’t cached; however, I think that there are less radical ways to be more free on the internet.
It’s not like we’re trapped in these echo chambers - Web 2.0 means that participant-based websites are all around us, and everybody has an equal voice in the comment section. It’s important to remember, too, that we have autonomy over what we view on the internet. Even though some content seems to be one-sided and it may seem that you’re not being shown an accurate representation of reality, like in real life, we are always able to search and find new information which we have not previously surrounded ourselves with, should we wish to.
I don’t think the media is limiting us online by generating content which we want to see; our online experiences are just conveniently becoming more similar to the way most of us choose to live in real life - surrounded by the things we know and like.
Source: https://www.bbc.co.uk/usingthebbc/cookies/how-can-i-change-my-bbc-cookie-settings/
References:
Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. New York: Penguin Press.
0 notes
Text
How can we stop being cyber idiots?
Image copyright Getty Images

Image caption Are you guilty of poor cyber-security habits?

Humans are often the weakest link in the chain when it comes to computer security. So how can we stop doing silly things that play into the hands of cyber-criminals?

When you ring IT support, the geek on the other end of the line thinks you’re an idiot. It’s the heavy sigh and patronising tone that give it away.

In fact, they have an acronym for us – PEBKAC. It stands for Problem Exists Between Keyboard And Chair. That’s you and me.

And before you get on your high horse full of indignation, ask yourself: when did I last back up my data? How many online accounts do I use the same password for? How many times have I clicked on a link in an email without really knowing who sent it?

Porn-loving US official spreads malware to government network

Every year we’re reminded how dumb we are when it comes to choosing passwords.

These range from the obviously bad “123456” and “password”, to the only marginally improved “12345678” and “admin”.

Other popular ones, according to a list drawn up from those found in breaches, are “letmein”, “iloveyou”, “welcome” and “monkey”.

Image copyright Getty Images

Image caption Admit it, you’ve often written down your password somewhere anybody could see it, haven’t you?

With passwords like these, a two-year-old could probably break into your account after bashing at the keyboard with a toy hammer for a few hours.

The fact is, we’re lazy.

“A lot of people forget their password and then just use the temporary password the IT department gave them,” says Thomas Pedersen from OneLogin, an identity and access management company.

“The problem is that these temporary passwords can often last a month.”

So in a large organisation, there are potentially hundreds of people using the same password.

“This makes them vulnerable to a password scrape attack – taking the most common passwords and trying them on millions of accounts,” says Mr Pedersen.

“The hackers will get a hit every 5,000 to 6,000 attempts.”

Once inside the system, the hackers can cause havoc.
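The economics of that attack are easy to see in code: checking a candidate against the handful of passwords quoted above takes next to no time, which is exactly what makes trying them across millions of accounts so cheap. A minimal sketch (the password list is reproduced from the article; the stronger-looking password is just an invented example):

```python
# The common passwords quoted above, as a set for fast membership checks.
COMMON_PASSWORDS = {"123456", "password", "12345678", "admin",
                    "letmein", "iloveyou", "welcome", "monkey"}

def is_common(candidate: str) -> bool:
    """Return True if the candidate matches a known-common password."""
    return candidate.lower() in COMMON_PASSWORDS

print(is_common("LetMeIn"))     # capitalisation doesn't save you
print(is_common("xK9#vQ2!mZ"))  # an invented, non-common example
```

Real credential-stuffing tools do essentially this at scale, which is why any password on such a list is effectively already broken.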
How not to be a password poodle

Use as long a password as you can cope with – at least more than eight characters

Mix upper-case and lower-case characters with symbols and numbers

Try not to use easily guessable words – the names of your children, spouse, pets, favourite sports teams and so on

Avoid sharing passwords with other people

Use different passwords for different sites and services

Use two-factor authentication

Consider using a password manager such as Dashlane, Sticky Password or Roboform
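A sketch of what following that advice might look like in practice: generating a long, mixed-character password with Python’s cryptographically secure random module (the symbol set here is an arbitrary choice, and a password manager would do this for you).

```python
# Generate a random password mixing cases, digits and symbols,
# using the stdlib's cryptographically secure random source.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def make_password(length: int = 16) -> str:
    """Return a random password of the given length."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(make_password())    # different every run
print(make_password(24))  # longer is better, if you can cope with it
```

Because each character is drawn independently from roughly 70 symbols, every extra character multiplies the guessing work by that factor.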
Major data breaches have become almost weekly occurrences, with Facebook, Cathay Pacific, British Airways, Reddit, Wonga and Dixons Carphone joining a long list of corporate victims in recent months.

More BBC News cyber-security stories

Two-factor authentication – using your smartphone or a separate dongle to provide an extra layer of security on top of your main log-in details – is becoming more common, especially using biometrics such as voice, fingerprint and facial recognition.

But these are less suited to the corporate setting because desktops don’t usually come with fingerprint readers or video cameras built in, Mr Pedersen points out.
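For the curious, the maths behind those smartphone authenticator codes is small enough to sketch with the standard library alone. This follows RFC 6238’s TOTP scheme; the shared secret is a made-up example, and real deployments should use a vetted library rather than hand-rolled code.

```python
# Time-based one-time passwords (TOTP, RFC 6238) from first principles.
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, digits: int = 6, step: int = 30) -> str:
    """Derive a time-based one-time code from a shared secret."""
    counter = struct.pack(">Q", timestamp // step)           # 30-second window
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                               # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both ends compute the same code as long as they share the secret and a clock.
print(totp(b"example-shared-secret", 1_541_000_000))
```

The server and the phone each run this function; a stolen password alone is useless without the current six-digit code.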
We’re also pretty dumb when it comes to clicking on links and downloading content we shouldn’t, says Ian Pratt, co-founder of cyber-security firm Bromium.

Lots of these links are loaded with malware – programs designed to burrow through corporate security systems, steal data and even take remote control of machines.

“More than 99% of [malicious links] are run-of-the-mill criminal malware that aren’t targeted,” he says. “That malware is trying to spread quite aggressively, but they don’t use any clever techniques.”

The simple stuff works.

Image copyright Getty Images

Image caption Many cyber-attacks have been successful because we clicked on something we shouldn’t have

“More than 70% of the breaches that we hear about have started on a PC, with some hapless user clicking on something that lets attackers get on to the network,” says Mr Pratt.

And hard-pressed IT departments have had their lives made much more difficult in recent years by the surge in mobile phones, laptops and tablets we use for work as well as for private purposes.

So many large firms are focusing on making the desktop PC idiot-proof.

Bromium’s tech works by isolating each action that takes place on a PC – sandboxing, to use the jargon.

“Virtually every task performed effectively gets its own computer,” explains Mr Pratt. “As soon as you finish that task we effectively throw that computer away and get out a new one.”

This means that if you click on a malicious link, the malware is isolated and can’t escape to infect the rest of the network.
Image copyright Getty Images

Image caption Would face recognition and other biometrics improve security in the workplace?

But keeping track of what we’re doing across a sprawling IT network can be very hard, says Paul Farrington, a former chief technology officer for Barclays and now a consultant at security firm Veracode.

Large organisations being clueless about the extent and reach of their IT assets is “very common”, he says.

A project Veracode carried out for one high street bank discovered 1,800 websites the organisation had not logged.

“Their perimeter might be 50% bigger than they originally thought it was,” says Mr Farrington.

And this ignorance can also extend to the number of computers – or “endpoints”, in the jargon – sitting on a corporate network, says Nathan Dornbrook, founder and head of security firm ECS.

One of his clients has more than 400,000 machines to manage, and several other customers have similar numbers.

“The machines contain substantial amounts of data and customer data, passwords to internal systems, and all sorts of bits and pieces in the easy single sign-on applications that cache credentials locally,” he says.

In other words, just one of these PCs is an Aladdin’s cave to a hacker.

“If one attack gets inside,” says Mr Dornbrook, “you lose the whole enterprise.”
More Technology of Business

So given that we’re PEBKACs and IT departments are overloaded, automated systems are becoming increasingly necessary, cyber-security experts say.

For example, ECS uses the Tachyon tool from security firm 1E to help monitor millions of PCs and keep them up to date with the latest software patches and security updates.

“Otherwise you just don’t have time to react,” says Mr Dornbrook.

Many other cyber-security companies are moving from a firewall approach to automated real-time traffic monitoring, looking for unusual behaviour on the network.

But it would certainly help if we all didn’t behave like PEBKACs at work and casually give away the keys to the kingdom.

Follow Technology of Business editor Matthew Wall on Twitter and Facebook

Click here for more Technology of Business features
0 notes