pleasantkidnight-blog
17 posts
pleasantkidnight-blog · 4 years ago
Link
1. Data in transit protection
2. Asset protection and resilience
3. Separation between users
4. Governance framework
5. Operational security
6. Personnel security
7. Secure development
8. Supply chain security
9. Secure user management
10. Identity and authentication
11. External interface protection
12. Secure service administration
13. Audit information for users
14. Secure use of the service
pleasantkidnight-blog · 4 years ago
Link
Library contracts
pleasantkidnight-blog · 4 years ago
Link
Google, like Apple, has gradually shifted the bulk of the security features and operations of its mobile devices into hardware and away from software as attacks against the mobile operating systems have grown more sophisticated over the years. In modern iPhones, the Secure Enclave serves many of the same functions as the Titan M chip, along with several others. The Secure Enclave is essentially a separate computer running inside the iPhone that boots on its own and has its own software that the primary iOS can’t access. It stores encryption keys, including the key used to encrypt and decrypt the biometric identifiers used on the device, whether fingerprints or Face ID images. Not only are keys for individual apps kept in the Secure Enclave, they’re also created inside the processor.
“When you store a private key in the Secure Enclave, you never actually handle the key, making it difficult for the key to become compromised. Instead, you instruct the Secure Enclave to create the key, securely store it, and perform operations with it. You receive only the output of these operations, such as encrypted data or a cryptographic signature verification outcome,” Apple’s documentation says.
The shift toward hardware for sensitive security operations in mobile devices reflects the relative difficulty of finding and exploiting damaging vulnerabilities in hardware versus software. The software attack surface in a modern mobile device is quite large, comprising the operating system, the pre-installed apps, and all of the third-party apps users install. The hardware attack surface is generally smaller and more difficult to reach. Google’s willingness to put a $1 million bounty on the table for a full remote exploit chain affecting the Titan M chip is a clear indicator of how much confidence the company has in the processor and its security.
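The handle-not-the-key pattern in Apple’s quote shows up in software APIs too. As a rough analogy only (this is the browser’s WebCrypto API, not the Secure Enclave interface), a non-extractable key can be used for operations, but its raw material can never be read back:

// Generate a signing key whose private material cannot be exported.
// (Run in a module or other async context for top-level await.)
const keyPair = await crypto.subtle.generateKey(
  { name: 'ECDSA', namedCurve: 'P-256' },
  false, // extractable = false: you hold a handle, never the key bytes
  ['sign', 'verify']
);

// You only ever receive the *outputs* of operations, e.g. a signature.
const data = new TextEncoder().encode('message to sign');
const signature = await crypto.subtle.sign(
  { name: 'ECDSA', hash: 'SHA-256' },
  keyPair.privateKey,
  data
);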
pleasantkidnight-blog · 4 years ago
Link
Main components:
the proportions of each species in the population, p_i
the similarity between species, a matrix Z
an additional parameter q that controls how much emphasis to place on “rare species” (i.e., whether you are measuring more the number of species or the relative balance across them)
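These three components fit together as in the Leinster–Cobbold similarity-sensitive diversity index, which is presumably what the linked post describes. A minimal sketch (function and variable names are mine, not from the post):

// qD^Z(p) = ( Σ_i p_i · ((Zp)_i)^(q-1) )^(1/(1-q)),  for q ≠ 1
function diversity(p, Z, q) {
  // (Zp)_i is the expected similarity of species i to the population
  const Zp = Z.map(row => row.reduce((acc, zij, j) => acc + zij * p[j], 0));
  const total = p.reduce((acc, pi, i) => acc + pi * Math.pow(Zp[i], q - 1), 0);
  return Math.pow(total, 1 / (1 - q)); // q = 1 needs the limit form, omitted here
}

// With Z = identity (distinct species share no similarity) and q = 2,
// this collapses to the inverse Simpson index:
const p = [0.5, 0.3, 0.2];
const Z = [[1, 0, 0], [0, 1, 0], [0, 0, 1]];
console.log(diversity(p, Z, 2)); // ≈ 2.63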
pleasantkidnight-blog · 4 years ago
Link
A quick intro to some of the simpler learning-to-rank (LTR) models... but they don’t seem to be personalized
pleasantkidnight-blog · 4 years ago
Link
For each category:
[figures]
Then for page generation you could start with
Template based
Row-based approach: rank rows according to the “score” of each row. But you could end up with a lot of similar rows.
Stage-wise approach: rows are selected sequentially, starting from the first; whenever a row is selected, the scores of the remaining rows are recomputed to take into account their relationship both to the previously selected rows and to the items already chosen for the page (see the sketch after this list).
ML-based approach: aim to learn a scoring function by training a model on historical information about the homepages generated for members, including what they actually saw, what they interacted with, and what they played.
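A minimal sketch of the stage-wise idea; the score field and the similarity function here are hypothetical placeholders, not Netflix’s actual models:

// Greedily build the page, re-scoring the remaining candidate rows
// against what has already been placed so similar rows get demoted.
function buildPage(candidateRows, numRows, similarity) {
  const page = [];
  const remaining = [...candidateRows];
  while (page.length < numRows && remaining.length > 0) {
    for (const row of remaining) {
      // Penalize a candidate by its strongest similarity to chosen rows
      const maxSim = page.reduce((m, chosen) => Math.max(m, similarity(row, chosen)), 0);
      row.adjustedScore = row.score * (1 - maxSim);
    }
    remaining.sort((a, b) => b.adjustedScore - a.adjustedScore);
    page.push(remaining.shift()); // take the best remaining row
  }
  return page;
}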
A/B testing
pleasantkidnight-blog · 5 years ago
Link
Epsilon-Greedy: An algorithm for continuously balancing exploration with exploitation. (In purely ‘greedy’ strategies, the lever with the highest known payout is always pulled.) Here, a randomly chosen arm is pulled a fraction ε of the time; the other 1−ε of the time, the arm with the highest known payout is pulled. See the sketch after this list.
Upper Confidence Bound: This strategy is based on the Optimism in the Face of Uncertainty principle: assume each arm’s unknown mean payoff is as high as the observed data plausibly allows.
Thompson Sampling (Bayesian): With this randomized probability-matching strategy, the number of pulls for a given lever should match its actual probability of being the optimal lever.
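As a concrete reference point, a minimal epsilon-greedy sketch (all names are illustrative):

// Pull a random arm with probability epsilon (explore); otherwise pull
// the arm with the best observed mean payout so far (exploit).
function makeEpsilonGreedy(numArms, epsilon) {
  const pulls = new Array(numArms).fill(0);
  const totals = new Array(numArms).fill(0);
  return {
    selectArm() {
      if (Math.random() < epsilon) {
        return Math.floor(Math.random() * numArms); // explore
      }
      // Untried arms get Infinity so each arm is sampled at least once
      const means = totals.map((t, i) => (pulls[i] > 0 ? t / pulls[i] : Infinity));
      return means.indexOf(Math.max(...means)); // exploit
    },
    update(arm, reward) {
      pulls[arm] += 1;
      totals[arm] += reward;
    },
  };
}

// Usage: const bandit = makeEpsilonGreedy(3, 0.1);
// const arm = bandit.selectArm(); /* show variant, observe reward */ bandit.update(arm, 1);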
Situations to use MAB in
Headlines and Short-Term Campaigns
Long-term Dynamic Changes
Targeting
Automation for scale
pleasantkidnight-blog · 5 years ago
Link
Assume you start with
<div id="parent">        <div id="firstchild">i am a first child</div>        <p id="secondchild">i am the second child</p>        <h4>i am alive</h4>        <h1>hello world</h1>        <p>i am the last child</p>    </div>
You can create and append an element using DOM manipulation
const createEl = document.createElement('div');
createEl.innerHTML = 'i am a frontend developer'; // set the new element's content
const parentEl = document.getElementById('parent');
parentEl.appendChild(createEl); // attach it as the last child of #parent
console.log(parentEl); // logs #parent, now ending with <div>i am a frontend developer</div>
You can do the same with other methods like insertBefore or replaceChild.
Dynamically add or remove styles/classes (e.g. when button is clicked)
The classList.toggle() method is used all over social media platforms like Twitter: it lets you like a post with a button and unlike it with that same button whenever you want.
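A minimal sketch of that like-button pattern (the element ID and class name are invented for illustration):

// Flip a "liked" class on each click; toggle() returns true when the
// class was just added, letting us update the label to match.
const likeBtn = document.getElementById('like-button');
likeBtn.addEventListener('click', () => {
  const liked = likeBtn.classList.toggle('liked');
  likeBtn.textContent = liked ? 'Unlike' : 'Like';
});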
pleasantkidnight-blog · 5 years ago
Link
There are many kinds of lock-in. By trying to avoid traditional lock-in (e.g. vendor or cloud) you’ll still lock yourself into other things (e.g. open source software)
Many enterprises are fascinated by the idea of portable multi-cloud deployments and come up with ever more elaborate, complex (and expensive) plans that'll ostensibly keep them free of cloud provider lock-in. However, most of these approaches negate the very reason you'd want to go to the cloud in the first place: low friction and the ability to use hosted services like storage or databases.
Consider lock-in along several dimensions:
[figure: dimensions of lock-in]
And for the switching cost itself:
[figure: breakdown of switching costs]
Then you’ll see a U-shaped curve: beyond a certain point, any further “investment” in avoiding lock-in leaves you worse off overall.
[figure: U-shaped total-cost curve]
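A toy cost model of that U shape (the numbers and the functional form are invented purely to illustrate the argument): total expected cost is the up-front portability investment plus the residual switching cost, weighted by how likely a switch actually is.

// More portability investment shrinks the residual switching cost,
// but the up-front spend eventually dominates the total.
function expectedTotalCost(investment, pSwitch, baseSwitchingCost) {
  const residualSwitchingCost = baseSwitchingCost / (1 + investment);
  return investment + pSwitch * residualSwitchingCost;
}

// With a 10% chance of ever switching and a switching cost of 100:
for (const inv of [0, 1, 2, 5, 10, 20]) {
  console.log(inv, expectedTotalCost(inv, 0.1, 100).toFixed(1));
}
// → 10.0, 6.0, 5.3, 6.7, 10.9, 20.5: the minimum sits at a modest
// investment, not at maximum lock-in avoidance.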
pleasantkidnight-blog · 5 years ago
Link
In the end, the most successful approach to this problem was using MetaGradients to dynamically adapt the learning rate during training, effectively letting the system learn its own optimal learning rate schedule.
“We found that making use of a linear combination of multiple loss functions (weighted appropriately) greatly increased the ability of the model to generalise. Specifically, we formulated a multi-loss objective making use of a regularising factor on the model weights, L_2 and L_1 losses on the global traversal times, as well as individual Huber and negative-log-likelihood (NLL) losses for each node in the graph. By combining these losses we were able to guide our model and avoid overfitting on the training dataset.”
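A minimal sketch of that kind of combined objective (the weights, the Huber delta, and the omission of the per-node NLL term are my simplifications, not DeepMind’s actual setup):

// Weighted multi-loss: L2 + L1 on the global traversal time, a Huber
// loss per node, and an L2 regularizer on the model weights.
function huber(err, delta = 1.0) {
  const a = Math.abs(err);
  return a <= delta ? 0.5 * err * err : delta * (a - 0.5 * delta);
}

function combinedLoss(predGlobal, trueGlobal, predNodes, trueNodes, modelWeights, w) {
  const e = predGlobal - trueGlobal;
  const nodeLoss = predNodes.reduce((acc, pred, i) => acc + huber(pred - trueNodes[i]), 0);
  const reg = modelWeights.reduce((acc, wi) => acc + wi * wi, 0);
  return w.l2 * e * e + w.l1 * Math.abs(e) + w.node * nodeLoss + w.reg * reg;
}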
pleasantkidnight-blog · 5 years ago
Link
Online transaction processing (OLTP) captures, stores, and processes data from transactions in real time. Online analytical processing (OLAP) uses complex queries to analyze aggregated historical data from OLTP systems.
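The contrast is easiest to see in the shape of typical queries (schema and values invented for illustration):

// OLTP: many small, real-time reads/writes touching individual rows.
const oltpQuery = "SELECT balance FROM accounts WHERE account_id = 42";
// OLAP: complex aggregation scanning large swaths of history.
const olapQuery = `
  SELECT region, SUM(amount) AS revenue
  FROM orders
  WHERE order_date >= DATE '2020-01-01'
  GROUP BY region`;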
pleasantkidnight-blog · 5 years ago
Link
Trends in data infrastructure
The good (core principles to keep in future innovation)
Horizontal products: We no longer need to buy a bunch of vertical-specific products to do analytics on specific things; we push data into a warehouse and can then analyze it all together in a common set of tools.
Fast: The modern data stack is fast both from an iteration perspective (connecting new data and exploring it is a snap relative to 2012) and from a pure query-execution-time perspective, as the performance breakthroughs of the MPP database now feed through the entire stack.
Unlimited scale: Using cloud infrastructure, it is now possible to trivially scale up just about as far as you could want to go. Cost now becomes the primary constraint on data processing.
Low overhead: Sophisticated data infrastructures of 2012 required massive overhead investment: infrastructure engineers, data engineers, etc. The modern data stack requires virtually none of this.
United by SQL: In 2012 it wasn’t at all clear which language or API would be used to unite data products, and as such integrations were spotty and few people had the skills to interface with the data. Today, all components of the modern data stack speak SQL, allowing for easy integrations and unlocking data access for a broad range of practitioners.
The bad (opportunities for future innovation)
Governance is immature: Throwing data into a warehouse and opening transformation and analysis up to a broad range of people unlocks potential, but it can also create chaos. Tooling and best practices are needed to bring trust and context to the modern data stack.
Batch-based: The entire modern data stack is built on batch-based operations: polling and job scheduling. This is great for analytics, but a transition to streaming could unlock tremendous potential for the data pipelines we’re already building…
Data doesn’t feed back into operational tools: The modern data stack is a one-way pipeline today: from data sources to warehouses to some type of data analysis viewed by a human on a screen. But data is about making decisions, and decisions happen in operational tools: messaging, CRM, ecommerce… Without a connection to operational tooling, tremendous value created by these pipelines is being lost.
Bridge not yet built to data consumers: Data consumers were actually more self-serve prior to the advent of the modern data stack: Excel skills are widely dispersed through the population of knowledge workers. There is not yet an analogous interface where all knowledge workers can seamlessly interact with data in the modern data stack in a horizontal way.
Vertical analytical experiences: With consolidation onto centralized data infrastructure, we’ve lost differentiated analytical experiences for specific types of data. Purpose-built experiences for analyzing web and mobile data, sales data, and marketing data are critically important.