#Blackbox Data Validation Testing
simknowsstuff · 6 months ago
Text
There is always a how if you leave out strict variables, and there is always a why even if it's not known or knowable, except for the most abstract of abstract universal fundamentals. There is almost always something to be synthesised for models that generalise complex material, or non-axiomatic structures.
can the mind be described as "id/ego/superego" from a freudian view? Yes
can the mind be described via neurological circuit modeling used by engineers/macro-neurologists? Yes
can the mind be understood as a series of blackboxes that seem to form general structures? Yes
can the mind be described as a computer that executes tasks by computer scientists? Yes
can the mind be viewed as something that is meant to minimise error as viewed by the mathematician? Yes
can the mind be viewed as something that uses different types of neurons with varying structure, NTs, and binding sites as viewed by the neuro-pharmacologist? Yes
Now, with regard to vague ideas and methodologies concerning the macrostructure of certain things (things only put together by empirical/anecdotal data and model error minimisation), the assumptions such ideas rely on are key to finding what is actually usable.
From my observation, small-circuit neurologists and neurobiologists have the most experimental data that serves to observe, model, and subsequently explain directly observed mechanisms.
Currently this rests on the assumption that what they are seeing in test samples is applicable to in vivo activity. It also includes data that match the behaviour of cross-referenced cells, to a reasonable probabilistic standard.
I would assume that much of neurophysiology assumes there is a baseline structure of the human brain that functions consistently – established via testing of multiple samples – and different ways of relating measurements of brain regions and their general "activity" to the behaviour of the generalised normal subject.
With careful consideration of methods, their applications, the consistency of their results, the synthesis of better hypotheses, and the observation of discrepancies, all data is useful.
In short, use the scientific method and create hypotheses for models that could apply to general structures in the real world, but also take into consideration what can be observed;
for example, the standard deviation, what is being tested, what could contradict the model or indicate an overfit (you should always look for that stuff), and what those observations generally validate.
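Since the post mentions checking for overfitting, here is a minimal sketch of what that check can look like in practice (my own illustration, not the original poster's; the noisy data and the degree-1 vs. degree-9 models are assumptions chosen purely to show the pattern). A model that fits the data it was trained on much better than data it never saw is exactly the kind of contradiction worth looking for.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 40)
y = 2.0 * x + rng.normal(scale=0.2, size=x.size)   # noisy linear "observations"

# hold out half the samples before fitting anything
train, test = np.arange(0, 40, 2), np.arange(1, 40, 2)

def rmse(deg):
    # fit a polynomial of the given degree on the training half only
    coeffs = np.polyfit(x[train], y[train], deg)
    pred = np.polyval(coeffs, x)
    err_train = np.sqrt(np.mean((pred[train] - y[train]) ** 2))
    err_test = np.sqrt(np.mean((pred[test] - y[test]) ** 2))
    return err_train, err_test

for deg in (1, 9):
    tr, te = rmse(deg)
    print(f"degree {deg}: train RMSE {tr:.3f}, held-out RMSE {te:.3f}")
# a large gap between the two errors is the overfit warning sign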
tl;dr: use the scientific method with hypnosis and study of the mind, and with most things tbh. we need to have the scientific method reintroduced into common life
0 notes
etl-testing-tools · 3 years ago
Text
Data Validation Testing
At a recent TDWI virtual summit on “Data Integration and Data Quality”, I attended a session titled “Continuous Data Validation: Five Best Practices” by Andrew Cardno.
In this session, Andrew Cardno, one of the adjunct faculty at TDWI, talked about the importance of validating data from the whole to the part, which means that the metrics or totals should be validated before reconciling the detailed data or drill-downs. For example, revenue totals by product type should be the same in the Finance, CRM, and Reporting systems.
Attending this talk reminded me of a Data Warehouse project I worked on at one of the federal agencies. The source system was a Case Management system with a Data Warehouse for reporting. We noticed that one of the key metrics, “Number of Cases by Case Type”, yielded different results when queried on the source database, the data warehouse, and the reports. Such discrepancies undermine trust in the reports and the underlying data. The reason for such a mismatch can be an unwanted filter, a wrong join, or an error during the ETL process.
In the case of the federal agency, this report is sent to Congress, and the agency has a congressional mandate to ensure that the numbers are correct. In other industries such as healthcare and finance, compliance requirements likewise demand that data be consistent across the multiple systems in the enterprise. It is essential to reconcile both the metrics and the underlying data across those systems.
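As a concrete sketch of that reconciliation step (my own illustration, not the Datagaps product; the case types and counts are made-up stand-ins for the results of a GROUP BY query against each system), the whole-to-part check can start by comparing aggregate totals side by side:

# counts as returned by e.g. SELECT case_type, COUNT(*) ... GROUP BY case_type
source_counts = {"Civil": 1204, "Criminal": 873, "Appeal": 95}      # case management system
warehouse_counts = {"Civil": 1204, "Criminal": 870, "Appeal": 95}   # data warehouse / report

def reconcile(source, warehouse):
    """Return the case types whose totals disagree between the two systems."""
    mismatches = {}
    for case_type in sorted(set(source) | set(warehouse)):
        s, w = source.get(case_type, 0), warehouse.get(case_type, 0)
        if s != w:
            mismatches[case_type] = (s, w)
    return mismatches

diffs = reconcile(source_counts, warehouse_counts)
if diffs:
    for case_type, (s, w) in diffs.items():
        print(f"{case_type}: source={s} warehouse={w} (drill down into this slice)")
else:
    print("Totals match; safe to move on to detail-level checks.")

Only the slices that disagree need to be drilled into at the detail level, which is exactly the whole-to-part order described above.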
Andrew describes two primary Data Validation testing techniques that help instill trust in the data and analytics:
Glassbox Data Validation Testing
Blackbox Data Validation Testing
I will go over these Data Validation testing techniques in more detail below and explain how the Datagaps DataOps suite can help automate Data Validation testing.
0 notes
kendrixtermina · 8 years ago
Note
Rluai is the most common for INFPs, just saying.
I think I've already said everything that there is to be said on the topic, but I will address this one ask because I believe it touches on a topic that I've only covered in passing on this blog so far.
Big Five vs. MBTI and possible correlations.
There is not actually that much data correlating which results were commonly received by the same people (some forums and tumblr's own eilamona have attempted surveys, though these would be biased by tumblr's distribution not matching RL's, and by the usual trappings of self-reporting).
Also, with the Big 5 having 2x2x2x2x2 = 32 categories and thus few people in each category, you would need huge sample sizes and a methodical procedure to get significant correlations. "The most common" could mean anything from one percentage point more than the others to "over half"; a simple tally is no statement about the distribution, and even a strong distribution spike is not equivalency.
See, for example, how ISTJs correlate with the enneagram. There is actually a clear, distinct tendency, with 90% of ISTJs being one of 3 types, but each of those (1, 6 and 5) accounts for roughly a third of that 90%, so it would be idiotic to say that, say, being a 6 means you must be ISTJ. What about non-6 ISTJs? What about 6s who are ISFJ?
So even if most RULAIs are INFPs, all that tells you is that if you're both, you're in the majority. But to tell the probability that a RULAI is INFP, or that a given INFP is RULAI, you would need to know either what share of all RULAIs are INFPs, or how many INFPs are not RULAIs.
I’m pretty sure I met some INFPs who were distinctly “E” (mostly 4w3s and/or soc-blinds) or “C” (chiefly 9w1s) for example, though I’d be surprised to find one who claims to be SCxxN. 
It's called "Bayes's theorem", and it's one of many examples of why the world would be much better if basic logic and probability theory were taught in schools.
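To make the base-rate point concrete, here is a small worked sketch in Python (the numbers are invented purely for illustration, not survey data): even if RULAI is the most common result among INFPs, the probability that a given RULAI is an INFP also depends on how common INFPs and RULAIs are overall.

# All figures below are made-up assumptions, only to show how Bayes' theorem works.
p_infp = 0.04               # share of the population that is INFP
p_rulai_given_infp = 0.60   # share of INFPs who test as RULAI (the "most common" result)
p_rulai = 0.10              # share of the population that tests as RULAI

# Bayes' theorem: P(INFP | RULAI) = P(RULAI | INFP) * P(INFP) / P(RULAI)
p_infp_given_rulai = p_rulai_given_infp * p_infp / p_rulai
print(f"P(INFP | RULAI) = {p_infp_given_rulai:.2f}")  # 0.24 with these numbers

With these made-up numbers, "most common for INFPs" still leaves roughly three out of four RULAIs being something other than INFP.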
What's more, much of what is out there on correlating mbti and big 5 is people trying to find some sort of equivalency between the systems, often based on a very dichotomous (and therefore shallow) understanding of mbti that disregards the differences between them as independent metrics. See also "16personalities.org" and their attempt to add the neuroticism metric (-A/T), which really just mucked up their test.
Often this is supposed to validate mbti by tying it to the much-renowned and supposedly so stable big 5 system - but big 5's supposed stability and consistency comes from being a much simpler, shallower system: it really is just a ranking of specific traits, or the lack thereof, on a dichotomous scale. You either are orderly and reliable ("conscientious") or you aren't. You can get assigned a percentage to represent stronger or weaker tendencies.
Big 5 asks you "are you X?", you tell it "yes/no/maybe", and then it gives you a profile saying you are indeed "yes/no/maybe" on the X scale, and that for each trait. That can be useful for some applications, like correlating those traits with lifestyle choices or opinions, or screening people for very demanding jobs, but it is virtually useless for the purposes most typology is used for - such as self-development or communication.
It ranks you on a scale, but it does not really tell you anything you didn't know before. It simply describes, but doesn't postulate any internal logic or structure - it doesn't have explanatory or predictive power. It doesn't elucidate your inner workings, and does not tell you how to get along better with a given type. It simply measures whether you are good at five things (socializing, keeping calm, being organized, making others like you, keeping an open mind) or not. There's no advantage to being "Egocentric", "Unstructured" or "Non-Curious."
Big 5 measures 5 independent metrics and the combination thereof, so "RULEI" (RUxEI is supposedly most common for INTPs) would not be that different from RULAI - after all, that's 4 out of 5 letters matching! The difference is simply that the one person goes a little further in not needlessly pissing people off, especially if the preference toward "A" is only a weak one.
Meanwhile, consider INFJ vs. INFP. One letter apart. Sure, there are many similarities, but also many fundamental differences, because it's not just one letter: it means your valued functions are completely opposite. They will share traits common to all introverts, feelers and intuitives, but differ completely where function-specific communication and reasoning differences are concerned.
You could label yourself as "INFx" because you're unsure about your actual type, but you can't actually be "in-between", because unlike Big 5, MBTI is not a combination of independent scales but a discrete classifier.
The MBTI, all tests based on it, and sister/branch theories like socionics are built upon the idea of the Jungian functions: different, distinct types of reasoning and information processing that C. G. Jung believed to have identified in the human mind. The system comes with the base axiom that you can have one of 8 dominant functions, and that's it, and you've got to at least humor that idea for a while to assign yourself an MBTI type. Each function comes with a set of both likely (present often) and fixed (present always) traits that will be shared by the great majority of that function's users - which is what gives mbti more predictive and explanatory power.
Someone being "Unstructured" just tells you they're not a great organizer; someone being a Perceiver implies a great deal about their way of thinking and decision making, be it neutral, good or bad, and if you knew whether they're SP or NP you could infer even more - not always hard predictions, but certainly probabilities.
Just from the definitions that both the 5 traits and the functions have, it figures that some combinations are more frequent than others (for the same reason that, say, an ISFJ core 8 sounds pretty unlikely), but that does not a hard equivalency make, especially since big 5 allows for twice as many possibilities.
The idea that you can just convert mbti letters to Big 5 letters as if the letters were all there were is fallacious. 
Indeed, some things do correlate:
R/S with I/E, for obvious reasons / pretty much by design. Introversion vs. extroversion is one of the most obvious differences in human personalities and hence where any metric to sort them would start.
But this is where it stops/ where things get weird or interesting depending on your PoV. 
L/C shows a very weak correlation but is almost evenly split among T/F. 
A/E shows some correlation of A with F and E with T, matching the stereotype that Feelers are "generally nicer", but it's not a hard rule.
The oddest result is that intuitives are almost always Inquisitive, but Sensors can be both and are evenly split overall, with individual types having their own preferences. This isn't just split along Si/Se lines as some stereotypes might suggest; ESTPs, for example, are very commonly Non-curious, but again, not always.
These traits also veer into what we might call morals, so they pose a problem. If people were predisposed toward their morals and could not be convinced, if they were "hardwired" so to speak, the whole idea of morality would be pointless, for with what authority would you "blame" someone for being close-minded or a jerk if they're just following their programming? But it is equally pointless to treat morality as a blackbox, even though we are coming closer and closer to understanding the brain.
There's also this tendency of treating anything we can detect as "organic" and everything we cannot as "mental", a Soul Of The Gaps if you will (analogous to God Of The Gaps) - but we know all mental processes are in the brain somehow (because they can be destroyed by specific brain injuries, for example), so would explaining it all mean putting it all outside a person's responsibility?
Hidden in there is the false assumption that the biologically explicable is "permanent"; that's true of some parts, but the strength of nerve connections can be as temporary as the state of a computer.
Add to that the problem that people do not sufficiently differentiate between facts and their interpretation. A fact is what is real regardless of what we think about it or whether we even know it. An interpretation is what a human thinks it ~means~, which matters only to humans.
Fact: The earth goes around the sun
Interpretation A: See humans? You are not special.
Interpretation B: See humans? You're not that bad. We get to participate in the "dance of the stars".
(The latter was actually written by a humanist writer of Kopernikus' own time. If the earth goes round the sun, it is not "down" (where hell is) or "up" (where god is) as many geocentric worldviews implied. "Up or down" becomes utterly meaningless with heliocentrism.)
Fact: The brains of Liberals and Conservatives show differences in scans
Interpretation A: The people are Conservatives or Liberals because of inborn characteristics
From this you could then derive corollary a) all politics is meaningless bullshit if we do not really "choose" it, or b) some politics is wrong, so some people (the ones you agree with) are better than others.
Interpretation B: People show differences because they are conservatives or liberals - the brain regions are how their opinions are "stored", and the media bubbles "train" them for characteristic reactions.
Interpretation C: Some people are more susceptible to certain kinds of propaganda; we [correct opinion] must phrase our message so it reaches those who are easily misled, so they don't end up voting against their interest. [Your opinion] is, after all, the best for everyone.
Of course, interpretations can become invalid if they don't account for additional facts. If they scanned children and the children had those characteristics before they even knew what politics is, B goes out the window - meanwhile, if you scanned people before and after their opinions changed and the corresponding brain regions changed too, B might increase in likelihood.
Another complicating factor is that people are more likely to see something as a neutral/preferential rather than a moral issue if they think it's inborn.
A common anti-homophobia argument is "But it's inborn!", which is used because it seems to convince a lot of people even though it has nothing to do with homosexual acts themselves. If we could all choose whether to screw men, women, enbies or no one at all, wouldn't it still not be anyone else's business if it harms no one?
By contrast, once upon a time "orderliness/discipline" was regarded as a moral thing, hence the very word "conscientiousness", but now we see it that way less, and there are cultural differences (some midwestern Americans see foul language as a "moral failing" ("good Christians don't swear") rather than simply inappropriate or rude).
Plenty to discuss here.
But basically, Big 5 and mbti are not equivalent and work by different principles; indeed, attempts to treat mbti like big 5 have probably resulted in a lot of the less reliable tests out there.
1 note · View note
startuplogy-blog · 6 years ago
Text
Helium 10 Coupon Code & Review (The Real All-in-one Swiss Army Knife)
In today's world, one of the essential skills is to understand the needs of digital customers before anyone else does, and so to take the lead ahead of the competition. What makes Amazon so successful is the way the company treats both customers and sellers. It is a forerunner at updating its services every year - not just renewing them, but levelling up their quality, adding very handy features to help users search the site, shop with ease under Amazon's class-leading guarantee, sell products, and so on. Amazon's A9 algorithm processes buying and selling data and aims to offer customers the right products, matched to their personality and personal buying behaviour. Every vendor who starts selling on Amazon knows that the most challenging stage of selling is finding a product that suits both the competition and customer demand; this is also where the most critical errors happen. After this sensitive stage of finding a product, it is necessary to verify that the product can actually sell, monitor daily sales volumes, properly analyse the keywords that drive sales of the relevant product, and accurately analyse the sales quantities of the competitors listed in the keyword search results. Successful execution of all these processes continues with the selection of the right product, confirming that the product meets the criteria for competitiveness and demand, creating the right product design, photos and an optimised product listing page, and then advertising, promotions and correct product supply management. Across all of the processes mentioned above, budgeting and time management must be done correctly, while the product selected can change the budget and time planning dynamically.
Introducing Helium 10, The Best All In One Amazon Seller's Tool:
Helium 10 has done a great job of bringing the world of Amazon sellers' software tools to a whole new level. It has a friendly user interface designed to let users explore as simply as possible. From finding the product to listing and advertising it, every operation can be compared data point by data point, tests can be carried out over a reasonably long period of time, and the results can be analysed effectively so that the right decisions are made. Helium 10, which includes all the tools to meet your management, analysis and optimisation needs, provides extremely sharp and consistent results thanks to its compelling infrastructure and database, saving opportunity cost and letting you manage the whole process effectively. Helium 10 aims to give users a clear view of other Amazon sellers' product- and keyword-based status, search volumes and trends. All the data obtained in Helium 10 can be moved quickly between the tools it contains, as well as exported in CSV format, with the aim of presenting all the complex analysis and optimisation processes in the most accurate and useful manner in the shortest time.
Black Box:
https://www.youtube.com/watch?v=PXhIO_uemJc Black Box helps users find the most profitable items on Amazon, which means it will save a massive amount of your time searching for a specific product. An Amazon seller can use Black Box to filter products by monthly revenue, price, review metrics and shipping size tier. It has a very smooth keyword search engine, a speciality of Helium 10, and is buttery smooth at exporting the details sellers want. As Helium 10's manager says: "Save time and energy researching every niche in existence on Amazon, to find your next perfect product to sell. Use BlackBox to get results in seconds." Black Box is similar to Jungle Scout's web application and Viral Launch's Product Search application: it works according to the filters you specify and shows you an extensive database of the products you want, which makes it the most comfortable product finder in the competition.
Helium10 Xray (Chrome Extension)
https://www.youtube.com/watch?v=45sNz1OFQpY A very capable extension that analyses Amazon's product pages with features like an ASIN grabber (which is very useful with Cerebro), a profitability calculator (using product details such as dimensions and price to calculate profitability), estimated revenue, direct access to listing optimisation via Scribbles, and more. All of this comes in a simply designed Chrome extension, giving you a vast ability to analyse every single product. It is best to start the product search process after confirming that the average review score is 4 or below, that monthly revenue is at least 5,000 dollars, and that the product would still be competitive after product-side improvements. You can also verify the products you find with the X-ray tool and review the status of the product on the pages where it is listed.
Cerebro
https://www.youtube.com/watch?v=6T4qVRelFmc This is the most unique tool of Helium 10, and one of the most popular according to experts' reviews. We all know that a high-quality product with a reasonable price will sell well, but there are always tricks that turn "good" into "perfect". Keyword research and keyword optimisation are among the most critical skills every Amazon seller should master. Cerebro uses reverse engineering: you look up a competitor's ASIN, see the keyword volumes it ranks for, and then use those keywords to drive higher sales. It also surfaces important data for creating profitable PPC ("pay per click") campaigns, Cerebro IQ scores, various ranking details, and valid listings for the ASIN you've entered in the search box. Cerebro is the edge for Helium 10, a unique tool that is a favourite of many of Amazon's great sellers, and you can't find anything like it in any competitor. You can also take a look at the video below: https://youtu.be/8hK81VNbocw
Magnet 2
One of the most successful tools of Helium 10, and one that makes the process much smoother. After choosing your keywords, you can enter them into its search box and see a massive list of products with similar keywords. It will show you critical information such as search volume, sponsored ASINs, competing products and Magnet's IQ score, to determine which keyword ideas are more valid and usable for your listings. You can then export its data directly to Frankenstein - read on to see what happens there. https://www.youtube.com/watch?v=Y328H04-b9M Magnet 2's most important abilities are: effectively evaluating your Amazon keyword ranking potential using keyword rating and search volume data; finding out which Amazon keywords are best to target; increasing traffic to your Amazon FBA product listings by targeting the phrases that are searched most often; and generating more sales by being found more often in A9 search engine results.
Frankenstein
According to Helium 10, Frankenstein is the most powerful keyword processor out there. Its engine brings your keyword research process to a whole new level. Frankenstein generates very rich listings out of keywords, then proceeds to create "cash-generating" inventory, as Helium 10's team claims. You can import the keyword list created by Magnet 2 or Cerebro, then use Frankenstein's filters to get your desired keyword list in less than three seconds. Its exported data is really valid and accurate. You can sort your list the way you want and save or copy it right away, or you can export it directly to Scribbles, which I will talk about down below. Take a look at the video below: https://youtu.be/-nn_o4cWE84
Scribbles
https://www.youtube.com/watch?v=UY5yRjktELI Here we have a listing optimiser that builds a listing containing the critical keywords and key phrases, based on customer demand and search volume, and exports it directly to your Seller Central using Amazon's seller API. Ready to use, and accurate enough that you can be sure your listing is valid.
Helium 10 Misspellinator (Keyword Research)
https://www.youtube.com/watch?v=f3qqEFqhbxg A tool that generates misspelt versions of your keywords, so your listings can capture the searches that come from typos but are often overlooked by other sellers. It is also an excellent tool for generating backend keywords, making sure commonly misspelt forms are covered even though they are not visible in the listing itself.
Helium 10 Keyword Tracker
https://www.youtube.com/watch?v=44-MMAcf8Ik One of the unique tools that lets you view keyword ranking status both for your own products and for your competitors'.
Helium 10 Hijacker Alert
https://www.youtube.com/watch?v=538MWHKHhuQ It is quite annoying when other sellers hijack the listings of products you sell and push you down the ranking. Especially for sellers with multiple products, Hijacker Alert notifies you as soon as a hijacker appears on one of your listings, so you can take action in the shortest time and stop other vendors on your product page from eliminating your sales.
Helium 10 Refund Genie
https://www.youtube.com/watch?v=e5Sh1asjgfk This tool takes responsibility for locating your lost or damaged inventory that must be reimbursed by Amazon. It generates the reports to send to Amazon and helps you regain your losses.
Price
You can take a look at the pricing plans down below. Also, as you can see, we offer you an instant 50% or 10% discount by using the coupons shown on the images. If you want a 50% discount on one month of Helium 10, use the Helium 10 coupon code: BESTCOUPON50
If you want a 10% discount on Helium 10 forever, use the Helium 10 coupon code: BESTCOUPON10
Conclusion
Let's sum things up. Almost 16 useful tools, some of them truly unique in their class (so it definitely has an edge here), an unbeatable price range, a well-designed UI, positive reviews and very responsive day-to-day usage - all of this is packed into Helium 10 to give vendors a delightful selling experience. Given all this, if you're thinking about succeeding on Amazon as a seller, it will be much easier with such software. Helium 10 is excellent value for the price, and clearly a wise choice for sellers who want to create an edge over their competition. Read the full article
0 notes
securitymy · 8 years ago
Text
State of the Art of Fuzzing and Exploitation: Usable or Useless?
Fuzzing or fuzz testing is an automated software testing technique that involves providing invalid, unexpected, or random data as inputs to a computer program. Today we share our findings from the past few years of fuzzing the most popular (and most vulnerable) browser, Internet Explorer. We have seen many notable security researchers perform fuzzing at large scale with different approaches and ideas. It is time for us to share our thoughts and experience.
So Why Fuzzing? 
Reduce time consumption
Uncover the unknown
Reverse engineering is tedious work 
Method and Technology
Static - Mutation and Generation
Blackbox - Framework
Writing a (browser) fuzzer from scratch is indeed a pain in the ass: it requires understanding the browser technology, the specifications (W3C), and the code to write (JS, HTML, CSS, XML, etc.). There are a number of great fuzzers that have been released to the public. The ideas and concepts behind those fuzzers provide input for writing a new one, rewriting one, or adding a feature that is missing from the original code. Combining them with a well-known fuzzing framework such as Grinder helps a lot in handling the fuzzing state. Our main ideas when writing a fuzzer:
Collect POC, templates, as much as we can / have
Creating multiple mutations for each test case (a simple mutator sketch follows this list)
Our dictionary is the specifications themselves (reading / understanding MSDN, W3C, MDN)
We focused more on DOM fuzzing (the rest too but not that much)
We used radamsa too :)
We created a simple framework to help us automate a few things such as analyzing the crash state (e.g. with PyDBG, WinAppDbg) and restarting the browser (on timeout or a crashed state) - a minimal harness sketch follows the next list.
And of course reverse engineer interesting stuff :)
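As a rough illustration of the "multiple mutations per test case" idea above, here is a minimal byte-level mutator sketch in Python (our actual fuzzer is not shown here; the template file name and the 1% mutation rate are illustrative assumptions, and a tool like radamsa does this far more cleverly):

import random

def mutate(data: bytes, rate: float = 0.01) -> bytes:
    """Return a copy of `data` with roughly `rate` of its bytes replaced at random."""
    out = bytearray(data)
    for i in range(len(out)):
        if random.random() < rate:
            out[i] = random.randrange(256)
    return bytes(out)

# Take one POC/template (hypothetical file name) and derive several mutated test cases.
template = open("poc_template.html", "rb").read()
for n in range(10):
    with open(f"testcase_{n}.html", "wb") as f:
        f.write(mutate(template))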
Our fuzzer consists of:
Generator
Randomization
Auto start debugging (Access Violation Read)
Process Enumeration (We need to debug the child process not the parent)
Auto generate POC (some POC not really reliable, need a little tweak or GTFO)
Logging capability
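The automation around those pieces is roughly the loop sketched below. This is a simplified stand-in rather than our real framework: the browser path, timeout and crash-folder layout are assumptions, and in the real setup a debugger such as WinAppDbg or PyDBG is attached to the child (tab) process to capture the access-violation details instead of relying on the exit code.

import subprocess, time, shutil, os

IE_PATH = r"C:\Program Files\Internet Explorer\iexplore.exe"  # assumed install path
TIMEOUT = 30           # seconds to let one test case render before restarting the browser
CRASH_DIR = "crashes"  # where test cases that killed the browser get archived for triage

os.makedirs(CRASH_DIR, exist_ok=True)

def run_one(testcase: str) -> None:
    """Launch the browser on one generated test case, enforce a timeout, log crashes."""
    proc = subprocess.Popen([IE_PATH, os.path.abspath(testcase)])
    try:
        rc = proc.wait(timeout=TIMEOUT)
    except subprocess.TimeoutExpired:
        proc.kill()    # hung or still rendering: restart and move on
        return
    if rc != 0:        # abnormal exit: keep the input and re-run it under a debugger later
        shutil.copy(testcase, os.path.join(CRASH_DIR, os.path.basename(testcase)))

for n in range(10):
    run_one(f"testcase_{n}.html")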
We have found a number of issues (some of them non-security issues) in the past, and have verified that some of them still work as of this write-up. We have reported a few issues to the vendor; unfortunately they were rejected. Over the past few years, many security (exploit) mitigations have been introduced, such as MemProt (MemGC), Heap Isolation, Virtual Table Guard (vtguard), and the powerful one, Control Flow Guard (CFG). We have also seen a number of research papers published on bypassing the security mitigations implemented in the browser.
A quick look at the exploit mitigations that have been introduced:
Example of MemProt / MemGC implementation
6b024838 e81f000000      call    MSHTML!CEventObj::~CEventObj
6b02483d f6450801        test    byte ptr [ebp+8],1
6b024841 740d            je      MSHTML!CEventObj::`vector deleting destructor'+0x20
6b024843 8b0d8041d66b    mov     ecx,dword ptr [MSHTML!g_hProcessHeap (6bd64180)]
6b024849 8bd6            mov     edx,esi
6b02484b e8a0942300      call    MSHTML!MemoryProtection::HeapFree (6b25dcf0)
Example of VTGuard implementation
6b02380f 8b33            mov     esi,dword ptr [ebx]
6b023811 81be28030000c000306b cmp dword ptr [esi+328h],offset MSHTML!__vtguard (6b3000c0)
6b02381b 7536            jne     MSHTML!CElement::IsLinkedContentElement+0x5c (6b023853)
Example of CFG implementation
6b02418f ff156c10d76b    call    dword ptr [MSHTML!__guard_check_icall_fptr (6bd7106c)]
6b024195 ffd6            call    esi
These mitigations can be seen while debugging IE, and we noticed that some areas are indeed protected by several layers, making the vulnerability almost dead (non-exploitable). However, there are a few ways to bypass the mitigations; we won't discuss those here. Here is an example of how the exploit mitigation handles a pointer to controlled data:
6b024b8e 8bf4            mov     esi,esp
6b024b90 51              push    ecx
6b024b91 8bcf            mov     ecx,edi
6b024b93 ff156c10d76b    call    dword ptr [MSHTML!__guard_check_icall_fptr (6bd7106c)]
6b024b99 ffd7            call    edi
We can see that the instruction at offset "6b024b99" is an indirect call through EDI. If we already controlled that data, we could point it at an instruction sequence that executes shellcode. However, the call at offset "6b024b93" first routes the target through another place (an indirect pointer check). Verifying where that pointer leads:
0:009> dds MSHTML!__guard_check_icall_fptr
6bd7106c  77162170 ntdll!LdrpValidateUserCallTarget
6bd71070  6b3002a0 MSHTML!CBlockedParentUnit::Add
Another example of CFG details can be found here.
Below we share a few issues that we found in the past. Some, we found, still work as of this write-up; some are fixed; and some are not exploitable (just a crash triggered during fuzzing, which we believe to be unreliable or simply memory exhaustion):
Issue 1 - MSHTML!CMarkupPointer::UnEmbed
(3144.3dd4): Stack overflow - code c00000fd (first chance)
First chance exceptions are reported before any exception handling.
This exception may be expected and handled.
eax=6abd3ecc ebx=061d3000 ecx=6b307880 edx=50555555 esi=0d2734a0 edi=6b307880
eip=6b25969a esp=061d3000 ebp=061d3048 iopl=0         nv up ei pl nz ac pe nc
cs=0023  ss=002b  ds=002b  es=002b  fs=0053  gs=002b             efl=00010216
MSHTML!CMarkupPointer::UnEmbed+0x1a:
6b25969a ff156c10d76b    call    dword ptr [MSHTML!__guard_check_icall_fptr (6bd7106c)] ds:002b:6bd7106c={ntdll!LdrpValidateUserCallTarget (77162170)}
Issue 2 - MSHTML!CTsfTextStore::Initialize
(220c.5340): Access violation - code c0000005 (first chance)
First chance exceptions are reported before any exception handling.
This exception may be expected and handled.
eax=00000000 ebx=04c703cc ecx=04c9c900 edx=00000000 esi=00000000 edi=04c702e0
eip=6b9be3b4 esp=0695bd70 ebp=0695bd98 iopl=0         nv up ei pl zr na pe nc
cs=0023  ss=002b  ds=002b  es=002b  fs=0053  gs=002b             efl=00010246
MSHTML!CTsfTextStore::Initialize+0x77:
6b9be3b4 8b0e            mov     ecx,dword ptr [esi]  ds:002b:00000000=????????
Issue 3 - WININET!ProxyBrokerClientResolver::ComMTAKeepAliveThread
0:009> !wow64exts.sw
Switched to 32bit mode
0:009:x86> kb
ChildEBP RetAddr  Args to Child
0d58fd64 74382e51 00000050 00000000 00000000 ntdll_77bb0000!ZwWaitForSingleObject+0x15
0d58fd78 764c14ab 00000050 00000000 00000000 VFBASICS!AVrfpNtWaitForSingleObject+0x21
0d58fde4 74382c8b 00000050 ffffffff 00000000 KERNELBASE!WaitForSingleObjectEx+0x98
0d58fe00 74382cfb 00000050 ffffffff 00000000 VFBASICS!AVrfpWaitForSingleObjectExCommon+0xa1
0d58fe20 76511194 00000050 ffffffff 00000000 VFBASICS!AVrfpKernelbaseWaitForSingleObjectEx+0x1c
0d58fe38 76511148 00000050 ffffffff 00000000 KERNEL32!WaitForSingleObjectExImplementation+0x75
0d58fe4c 74382a54 00000050 ffffffff 096baf94 KERNEL32!WaitForSingleObject+0x12
0d58fe64 74382a9c 00000050 ffffffff 7438f418 VFBASICS!AVrfpWaitForSingleObjectCommon+0x9e
0d58fe80 76336a63 00000050 ffffffff 762df3d8 VFBASICS!AVrfpKernel32WaitForSingleObject+0x23
0d58feb0 76336b20 00000000 0c2c6fe0 743a4bf0 WININET!ProxyBrokerClientResolver::ComMTAKeepAliveThread+0xd5
0d58fec8 7438602c 096baf38 4d0d531f 00000000 WININET!ProxyBrokerClientResolver::ComMTAKeepAliveThreadStart+0x30
0d58ff00 7651337a 0c2c6fe0 0d58ff4c 77be92e2 VFBASICS!AVrfpStandardThreadFunction+0x2f
0d58ff0c 77be92e2 0c2c6fe0 4efe9fa8 00000000 KERNEL32!BaseThreadInitThunk+0xe
0d58ff4c 77be92b5 74385ffd 0c2c6fe0 ffffffff ntdll_77bb0000!__RtlUserThreadStart+0x70
0d58ff64 00000000 74385ffd 0c2c6fe0 00000000 ntdll_77bb0000!_RtlUserThreadStart+0x1b
Issue 4 - iertutil!_IsoThreadWindowsPump
ModLoad: 00000000`3a510000 00000000`3a59c000   C:\Windows\SysWOW64\uiautomationcore.dll
wow64cpu!CpupSyscallStub+0x9:
00000000`73f12e09 c3              ret
0:003> kb
RetAddr           : Args to Child                                                           : Call Site
00000000`73f1283e : 00000000`75780735 00000000`00000023 00000000`00000246 00000000`0af2fca8 : wow64cpu!CpupSyscallStub+0x9
00000000`73f8d286 : 00000000`0aa9fd20 00000000`73f11920 00000000`00000000 00000000`0aa9ec70 : wow64cpu!WaitForMultipleObjects32+0x3b
00000000`73f8c69e : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : wow64!RunCpuSimulation+0xa
00000000`77a44ff4 : 00000000`00000000 00000000`7efdf000 00000000`7efad000 00000000`00000000 : wow64!Wow64LdrpInitialize+0x42a
00000000`779fb78e : 00000000`0aa9f260 00000000`00000000 00000000`7efdf000 00000000`00000000 : ntdll! ?? ::FNODOBFM::`string'+0x25c64
00000000`00000000 : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : ntdll!LdrInitializeThunk+0xe
0:003> !sw
Switched to 32bit mode
0:003:x86> kb
ChildEBP RetAddr  Args to Child
0af2fb2c 764c15f7 00000001 0af2fb7c 00000001 ntdll_77bb0000!ZwWaitForMultipleObjects+0x15
0af2fbc8 74972dab 0af2fb7c 0af2fc3c 00000000 KERNELBASE!WaitForMultipleObjectsEx+0x100
0af2fbec 74972e27 00000001 0af2fc3c 00000000 VFBASICS!AVrfpWaitForMultipleObjectsExCommon+0xa7
0af2fc14 76511a0c 00000001 0af2fc3c 00000000 VFBASICS!AVrfpKernelbaseWaitForMultipleObjectsEx+0x22
0af2fc5c 74972dab 00000001 7efde000 00000000 KERNEL32!WaitForMultipleObjectsExImplementation+0xe0
0af2fc80 74972dfc 00000001 0af2fcd0 00000000 VFBASICS!AVrfpWaitForMultipleObjectsExCommon+0xa7
0af2fca8 7578086a 00000001 0af2fcd0 00000000 VFBASICS!AVrfpKernel32WaitForMultipleObjectsEx+0x2c
0af2fcfc 759ebaf3 000000f8 00000000 ffffffff USER32!RealMsgWaitForMultipleObjectsEx+0x14d
0af2fd54 759eec26 06ee0f18 0af2fddc 6c2530d0 iertutil!_IsoThreadWindowsPump+0xed
0af2fdd0 74a33991 089f8d58 00000000 0a954fe0 iertutil!IsoManagerThreadNonzero_WindowsPump+0xa6
0af2fe08 7497602c 09930fe8 4af940b3 00000000 IEShims!NS_CreateThread::DesktopIE_ThreadProc+0x94
0af2fe40 7651337a 0a954fe0 0af2fe8c 77be92e2 VFBASICS!AVrfpStandardThreadFunction+0x2f
0af2fe4c 77be92e2 0a954fe0 49a58c4c 00000000 KERNEL32!BaseThreadInitThunk+0xe
0af2fe8c 77be92b5 74975ffd 0a954fe0 ffffffff ntdll_77bb0000!__RtlUserThreadStart+0x70
0af2fea4 00000000 74975ffd 0a954fe0 00000000 ntdll_77bb0000!_RtlUserThreadStart+0x1b
Issue 5 - SHLWAPI!StrCmpICW+0xd
STACK_TEXT:
0c1b8a9c 6b1adb68 00000000 6b1adb90 0c1b8adc SHLWAPI!StrCmpICW+0xd
0c1b8abc 6a381fd9 5096aeac 69ed0df0 0c1b8af0 IEFRAME!CShellOcx::GetIDsOfNames+0x48
0c1b8ae8 6a383e78 00000000 00000001 0c1b8b54 MSHTML!COleSite::GetDispIDFromControl+0x66
0c1b8afc 6a740e03 64ba2e90 00000000 00000001 MSHTML!COleSite::GetCustomDispID+0x48
0c1b8b2c 6a542532 64ba2e90 ffffffff 6efecde0 MSHTML!COleSiteDispatchTypeOperations::GetDispId+0x73
0c1b8b58 7445e2d6 79326ff0 6efecde0 699f24e0 MSHTML!CDispatchTypeOperations::HasOwnProperty+0x9f
0c1b8bb4 746151d3 ffffffff 553138b8 7444cf00 JSCRIPT9!Js::CustomExternalObject::HasProperty+0x10d
0c1b8bd0 745c10d3 699f24e0 ffffffff 553138b8 JSCRIPT9!Js::JavascriptOperators::OP_HasProperty+0x67
0c1b8c44 7456ccc5 59987bd4 00000008 0c1b8d04 JSCRIPT9!JavascriptDispatch::GetPropertyIdWithFlag+0x1b6
0c1b8ca8 7456cbdc 59987bd4 00000008 0c1b8d04 JSCRIPT9!JavascriptDispatch::GetPropertyIdWithFlagWithScriptEnter+0x96
0c1b8d20 21bac8dc 699e2c84 59987bd4 00000008 JSCRIPT9!JavascriptDispatch::GetDispID+0xbc
0c1b8d60 21b8350c 0c1b8f38 0c1b8e50 5d374e00 vbscript!GetDispatchDispID+0x144
0c1b8fa4 21b8526e 0c1b9130 336d1c9b 5cd36fd0 vbscript!CScriptRuntime::RunNoEH+0x1951
0c1b8ff4 21b8518b 0c1b9130 337cefe0 3ddd9fe8 vbscript!CScriptRuntime::Run+0xc3
0c1b9104 21b859bd 0c1b9130 00000000 5cd36fd0 vbscript!CScriptEntryPoint::Call+0x10b
0c1b9178 21ba9b50 337cefe0 0c1bb3a8 00000000 vbscript!CSession::Execute+0x156
0c1b920c 6a447bad 33d48fe0 00000002 00000409 vbscript!NameTbl::InvokeEx+0x37a
0c1b9250 6a39dc38 58c23f48 00002712 00000409 MSHTML!CScriptCollection::InvokeEx+0xc5
0c1bb2cc 69fd008b 0c5cde30 00002712 00000409 MSHTML!CWindow::InvokeEx+0x3a9
0c1bb300 69fd00b5 0c5cde30 00002712 00000409 MSHTML!CBase::VersionedInvokeEx+0xcf
0c1bb340 6a39de0e 0c5cde30 00002712 00000409 MSHTML!CBase::PrivateInvokeEx+0xbd
0c1bb3bc 6a12ed5b 5f08ef68 00002712 00000409 MSHTML!COmWindowProxy::InvokeEx+0x2b4
0c1bb3f0 6a48821c 5f08ef68 00002712 00000409 MSHTML!CBase::VersionedInvokeEx+0x8b
0c1bb4a0 7458dfbe 69cc3ab0 10000001 55880e40 MSHTML!CJScript9Holder::Trampoline_DispatchMethod+0x232
0c1bb4d8 7458df27 10000001 0c1bb530 1a36dcd7 JSCRIPT9!Js::JavascriptFunction::CallFunction<0>+0x69
0c1bb520 7445c956 69cc3ab0 10000001 55880e40 JSCRIPT9!Js::ExternalType::ExternalEntryThunk+0x93
0c1bb7c8 7445d229 1e01c04a 699f3120 1e01c000 JSCRIPT9!Js::InterpreterStackFrame::Process+0x1e72
0c1bb8ec 0b8b0fe9 0c1bb900 0c1bb944 7445699a JSCRIPT9!Js::InterpreterStackFrame::InterpreterThunk<1>+0x200
WARNING: Frame IP not in any known module. Following frames may be wrong.
0c1bb8f8 7445699a 699e3560 00000002 55880e40 0xb8b0fe9
0c1bb944 74457081 00000002 0c1bbadc 1a36d04f JSCRIPT9!Js::JavascriptFunction::CallFunction<1>+0x91
0c1bb9b8 74456fb6 553138b8 00000002 0c1bbadc JSCRIPT9!Js::JavascriptFunction::CallRootFunction+0xb9
0c1bba00 74456f49 0c1bba2c 00000002 0c1bbadc JSCRIPT9!ScriptSite::CallRootFunction+0x42
0c1bba4c 745fb4d4 699e3560 0c1bba7c 00000000 JSCRIPT9!ScriptSite::Execute+0xd2
0c1bbab0 6a3376ad 6efecde0 699e3560 00000002 JSCRIPT9!ScriptEngineBase::Execute+0xc8
0c1bbb6c 6a337547 699e3560 5806cfa0 19a2cfc0 MSHTML!CListenerDispatch::InvokeVar+0x15a
0c1bbb98 6a337232 5806cfa0 19a2cfc0 0c1bbd44 MSHTML!CListenerDispatch::Invoke+0x6d
0c1bbc38 6a3373a6 0c1bbd44 00000001 5806cfa0 MSHTML!CEventMgr::_InvokeListeners+0x210
0c1bbc50 6a3372c3 5806cfa0 00000000 00000001 MSHTML!CEventMgr::_InvokeListenersOnWindow+0x42
0c1bbce0 6a337000 0c1bbd44 00000000 5806cfa0 MSHTML!CEventMgr::_InvokeListeners+0x150
0c1bbe40 69f9eba9 00000000 ffffffff 00000000 MSHTML!CEventMgr::Dispatch+0x4d5
0c1bbe68 6a588900 57510fe0 ffffffff 1b892f68 MSHTML!CEventMgr::DispatchEvent+0x90
0c1bbea0 69fd18e7 73124bb0 1c170f48 69f06e70 MSHTML!COmWindowProxy::Fire_onload+0x146
0c1bbf00 69fd1537 1b3fcbd0 1c170f48 1b3fcbec MSHTML!CMarkup::OnLoadStatusDone+0x373
0c1bbf14 69fd071c 00000004 69fb05c0 0c1bc374 MSHTML!CMarkup::OnLoadStatus+0xe7
0c1bc358 69fb05d2 10000019 0c1bc3ac 69f07b33 MSHTML!CProgSink::DoUpdate+0x48b
0c1bc364 69f07b33 1c170f48 1c170f48 00000000 MSHTML!CProgSink::OnMethodCall+0x12
0c1bc3ac 69eed45e 1b2ef2f0 00000000 69eed0c0 MSHTML!GlobalWndOnMethodCall+0x17b
0c1bc400 775d62fa 000a0498 00008002 00000000 MSHTML!GlobalWndProc+0x12e
0c1bc42c 775d6d3a 69eed0c0 000a0498 00008002 USER32!InternalCallWinProc+0x23
0c1bc4a4 775d77c4 00000000 69eed0c0 000a0498 USER32!UserCallWinProcCheckWow+0x109
0c1bc504 775d788a 69eed0c0 00000000 0c1bf6e4 USER32!DispatchMessageWorker+0x3bc
0c1bc514 6b17a638 0c1bc554 06f92e48 09f3dfe0 USER32!DispatchMessageW+0xf
0c1bf6e4 6b1d5f88 0c1bf7b0 6b1d5c00 06f94ff0 IEFRAME!CTabWindow::_TabWindowThreadProc+0x464
0c1bf7a4 7735e7bc 06f92e48 0c1bf7c8 6b2456e0 IEFRAME!LCIETab_ThreadProc+0x3e7
0c1bf7bc 6ecc4b01 06f94ff0 00000000 0affafe0 iertutil!_IsoThreadProc_WrapperToReleaseScope+0x1c
0c1bf7f4 6f0f602c 0b473fe8 63130241 00000000 IEShims!NS_CreateThread::DesktopIE_ThreadProc+0x94
0c1bf82c 76cc337a 0affafe0 0c1bf878 77e192e2 VFBASICS!AVrfpStandardThreadFunction+0x2f
0c1bf838 77e192e2 0affafe0 7bf2ce2a 00000000 KERNEL32!BaseThreadInitThunk+0xe
0c1bf878 77e192b5 6f0f5ffd 0affafe0 ffffffff ntdll_77de0000!__RtlUserThreadStart+0x70
0c1bf890 00000000 6f0f5ffd 0affafe0 00000000 ntdll_77de0000!_RtlUserThreadStart+0x1b
Here we provide an example of an issue found while fuzzing. The function WININET!ProxyBrokerClientResolver::ComMTAKeepAliveThreadStart leads to the root cause. Our testing found that this only works on IE11. The issue seems to have been patched (either internally or via a report) with the latest update of IE11.
A simple piece of JavaScript can be crafted this way:
var start = Date.now();
Date.now() - start <= 0xFFFF;
Successful execution may lead to the crash path below:
Crash analysis:
0:009> !wow64exts.sw
Switched to 32bit mode
0:009:x86> kb
ChildEBP RetAddr  Args to Child
0d58fd64 74382e51 00000050 00000000 00000000 ntdll_77bb0000!ZwWaitForSingleObject+0x15
0d58fd78 764c14ab 00000050 00000000 00000000 VFBASICS!AVrfpNtWaitForSingleObject+0x21
0d58fde4 74382c8b 00000050 ffffffff 00000000 KERNELBASE!WaitForSingleObjectEx+0x98
0d58fe00 74382cfb 00000050 ffffffff 00000000 VFBASICS!AVrfpWaitForSingleObjectExCommon+0xa1
0d58fe20 76511194 00000050 ffffffff 00000000 VFBASICS!AVrfpKernelbaseWaitForSingleObjectEx+0x1c
0d58fe38 76511148 00000050 ffffffff 00000000 KERNEL32!WaitForSingleObjectExImplementation+0x75
0d58fe4c 74382a54 00000050 ffffffff 096baf94 KERNEL32!WaitForSingleObject+0x12
0d58fe64 74382a9c 00000050 ffffffff 7438f418 VFBASICS!AVrfpWaitForSingleObjectCommon+0x9e
0d58fe80 76336a63 00000050 ffffffff 762df3d8 VFBASICS!AVrfpKernel32WaitForSingleObject+0x23
0d58feb0 76336b20 00000000 0c2c6fe0 743a4bf0 WININET!ProxyBrokerClientResolver::ComMTAKeepAliveThread+0xd5
0d58fec8 7438602c 096baf38 4d0d531f 00000000 WININET!ProxyBrokerClientResolver::ComMTAKeepAliveThreadStart+0x30
0d58ff00 7651337a 0c2c6fe0 0d58ff4c 77be92e2 VFBASICS!AVrfpStandardThreadFunction+0x2f
0d58ff0c 77be92e2 0c2c6fe0 4efe9fa8 00000000 KERNEL32!BaseThreadInitThunk+0xe
0d58ff4c 77be92b5 74385ffd 0c2c6fe0 ffffffff ntdll_77bb0000!__RtlUserThreadStart+0x70
0d58ff64 00000000 74385ffd 0c2c6fe0 00000000 ntdll_77bb0000!_RtlUserThreadStart+0x1b
Disassembly of the crash path:
0:009:x86> u 76336b20
WININET!ProxyBrokerClientResolver::ComMTAKeepAliveThreadStart+0x30:
76336b20 8bf0            mov     esi,eax
76336b22 85f6            test    esi,esi
76336b24 7907            jns     WININET!ProxyBrokerClientResolver::ComMTAKeepAliveThreadStart+0x3d (76336b2d)
76336b26 c745fc15040000  mov     dword ptr [ebp-4],415h
76336b2d f605f8d53b7610  test    byte ptr [WININET!g_rgWppEnabledFlagsPerLevel+0xc (763bd5f8)],10h
76336b34 7412            je      WININET!ProxyBrokerClientResolver::ComMTAKeepAliveThreadStart+0x58 (76336b48)
76336b36 8bce            mov     ecx,esi
76336b38 e8e0faeeff      call    WININET!WX_WIN32_FROM_HR (7622661d)
Browser fuzzing is getting harder and harder, with a lot of competitors (you name them) doing the same thing. Other approaches can still succeed, although the competition is tough! We can see that the most popular classes of vulnerabilities these days are getting harder to exploit due to the mitigations.
Keep on fuzzing :)
I guess that’s all for today. Selamat Hari Raya :)
-n
0 notes
topicprinter · 6 years ago
Link
The concept was brewing for a while, but the pivotal moment came whilst I was waiting to get my haircut: I picked up a well-known golf magazine (it shall not be named). I was saddened by how poor the advice was in a particular article, and I decided there must be a market for high-quality, in-depth golf content. Golf Insider UK aims to build the world's best resources for golfers wishing to get better. To date, it mainly offers written content, videos and some basic analysis tools, but I have grand plans for the future. I see the website as the hub and marketing channel for all that follows. The site started off with instructional content and a small amount of affiliate content; this covered the running costs for the first 2-3 months. The next key business goal was to grow to 25,000 sessions a month so I could apply to join MediaVine and earn advertising revenue.

How did you validate the idea?

I produced 5 articles on Blogger to test if there was a demand for high-quality, in-depth golf content. The 4th or 5th article got shared on Reddit and received ~3,000 reads in a few hours. That was the time I decided to get my head down and build a proper website on Wordpress.

Did you have any experience/expertise in the area?

A lot of experience and knowledge of golf coaching and sport science, but zero experience in building websites and SEO. The Authority Hacker Podcast and content from Matt Diggity's blog helped a lot in understanding how to build a site and the business models available for websites to generate income. Both of these resources are really useful for anyone wishing to build a website and business model similar to Golf Insider UK.

How did you fund the project?

I invested all of $150 to set up the business, buy hosting and a domain name. Based on my previous start-up experiences (covered later on) I was keen to bootstrap this project and keep it really lean. The aim was to see if this venture could become something worth pursuing by the time I finished my PhD. I also love the idea of keeping complete control of what I do and where I take the business. Besides my own time, the running costs for the business for the first 12 months were under $50/month. Discounting my own wage, I can still scale the business back to run on under $150/month if I need to. I know this flies in the face of Silicon Valley and the VC world, but my two aims are to:
Build something unique, of great value to the golfing community.
Maximise profitability.
Because of this approach I'm not constrained like the larger golf publications and competitors to keep churning out content and blasting messages across social media. I can take different approaches and build interesting content and tools. I'm not saying this approach is perfect, but to date, I've been really surprised with how much I have been able to grow on such little funding and resources. Being different from your competitors doesn't end with your branding and marketing pitch. Your USP should be leveraged across your business, from day-to-day decisions to growth strategies.

Who is your target demographic?

Golfers, surprisingly. I initially thought it would be 1% of the golfing population, and predominantly advanced players, but I quickly realised the site reaches a far wider demographic. It is closely aligned to a psychographic that loves to understand how things work and also loves playing golf.
It includes beginners, golf coaches and every level of golfer in between.

How do you drive visitors to your website?

80% of the traffic has been from SEO, 20% via golfing forums and growing an email list. For the first 18 months I only had 10-15 hours a week to grow the business. For this reason I decided to focus on growing one traffic source and to do that really well. I think this focus on SEO was a key reason traffic grew so quickly. Currently I do feel too dependent on Google and their blackbox algorithm; a key aim in the next 12 months is to diversify the traffic sources. Youtube and Pinterest are two I'm currently researching and planning.

How long did it take you to monetize?

Within a few weeks I began to see some incoming revenue; it was patchy, but really exciting. It took 8 months for the site to consistently make a few hundred dollars a month, which was when I started with display ads. That figure jumped to $1,000/month by month 13 with increased traffic and some new affiliate content, like guides for buying beginner golf clubs. Then it really took off to mid-four figures 2-3 months later, with a combination of how-to content and more affiliate reviews. At the moment the site is funding the end of my PhD, paying me a wage, and I am still building up a sizable amount to reinvest back into the business. Last week I transitioned from 10-15 hours a week at evenings and weekends to dedicating 3 days a week to growing the business. I'm excited to see how far I can take it.

Did you run any companies prior?

I was previously a director for a start-up building a golf coaching app. The company raised $450,000 and we built a really cool product and hired some great coaches, but there wasn't a strong product-market fit for the paid golf coaching service. Ultimately, the business ran out of money soon after the launch date. A key learning from this venture was to be frugal with resources and to have enough time to tweak and refine any products. Following this business I've worked with elite athletes, in and out of golf, applying sport coaching and sport science concepts on a self-employed basis. I wouldn't class this work as building a company, as it couldn't scale, but the core concepts of Golf Insider UK are based on testing and tweaking concepts from these first two ventures. Golf Insider UK is based on the same concepts, but has been designed on a very different business model. Business models are something that really fascinate me. I would urge any founder who thinks they have something of value to an audience to scribble down multiple models they could use to build a profitable business around their idea. It isn't always the most obvious approach that succeeds.

What motivated you to start your own business?

I love the idea of building and developing things, whether it be businesses, content or improving elite athletes. I used to love playing games like Theme Park World and Sim City growing up. Start-ups are just like playing these games in the real world.

What were your family and friends' first thoughts on you creating your own company?

I've always been known as the 'happy, weird one'. From golf coach, to lecturing at university, to starting a PhD in Biomedical Science - I've taken quite a strange path through life so far. I don't think people are surprised I decided to build a site like this, but my Mum certainly doesn't understand how it is a 'job'.
I think she thinks I'm secretly selling my organs to fund my lifestyle.

Do you have any advice for someone just starting out?

Set clear aims, and try to do 1-2 things really well. I feel most start-ups succeed because they get 1-2 things really right and provide value. This is far more valuable than setting off with every aspect being sound, but having no strengths. Test, re-test and tweak your product/service. Try to do it in a way that gives you a long runway to get the product/service and business model optimised.

What is stopping you from being 3x the size you are now?

Time - I'm delighted I've grown the site to reach 100,000 golfers a month and built a profitable business on 10-15 hours a week with minimal start-up capital. Next, I can invest more time and begin to outsource some tasks to graphic designers and videographers to up the level of content production. I could outsource a lot of the writing too and become a manager of systems, but I really enjoy producing content. The great thing about building a business is that you have control over what you build and what your days look like. I'm not trying to build the biggest business in my space, but I do want it to be known as the best.

Which is your favorite article?

This article on Golf Practice Routines - it is an old piece, and I hate the way it looks, but it does exactly what great content marketing should do. It ranks number 1 in Google for relevant queries, it gives readers great value and then up-sells my product - a Golf Performance Diary. This one piece of content has been really valuable for attracting new customers and converting them into loyal followers.

What are the top apps your blog could not run without?

Wordpress takes a little getting used to, but is a great content management platform. I've recently invested in Ahrefs; this is the swiss army knife for SEO. It saves me hours each week and gives me great data to build and grow the site. Lastly, a really nice notebook. I do like Google Drive, but I still scribble down ideas in a notebook whenever they come to me and plan out my key tasks every day. I re-read my notebook every 2 weeks; it is so useful to see my progress, grab missed ideas and keep on top of my workflow. My aim is to keep growing the site and the business. The next big milestone is reaching 250,000 unique golfers in one month.

Are there any new features you're working on?

I'm currently planning an interactive tool for golfers to find practice games and drills. The idea is that they enter the area they want to improve (iron-play), current ability (18 handicap) and practice aim (improve strike). Based on this data the site returns the optimum way for them to practice, with videos, notes and training targets. I'll hopefully have a draft version built by January 2020, before deciding if it is worth filming a lot of videos to fully deliver the idea.

What is current revenue?

The site generates mid-four figures a month, which is pretty good for one person working 10-15 hours a week. I can see a pathway to increasing the monthly revenue by 2-3x. If I can get close to that in the next 18 months I will be happy to take a step back and consider what is next.

Would you ever sell the company?

I've had three offers in the past 3 months, but I don't plan to sell any time soon. I aim to finish my PhD in the next 3 months, then move onto Golf Insider UK full time and see what I can achieve.
I wouldn't ever rule out selling a share or all of it, but that is a distant thought at the moment.

If you enjoyed this interview, the original is here.
0 notes
coin-river-blog · 7 years ago
Link
Blockchain testing and research company Whiteblock Inc. has released a damning verdict on EOS, describing it as a “distributed homogeneous database” masquerading as a blockchain. In a report titled “EOS: An Architectural, Performance and Economic Analysis,” the company dissects several aspects of the EOS protocol and comes to the conclusion that it suffers from a serious security deficiency as well as network performance that is significantly lower than what was claimed.
Extraordinary Findings
According to the report compiled by Whiteblock’s research team made up of Brent Xu, Dhruv Luthra, Zak Cole, and Nate Blakely, EOS has a number of shocking security and protocol failings that fatally compromise many of the use cases suggested for the network once dubbed the “Ethereum killer.”
Over the course of two months since its September launch, the test evaluated the EOS network’s transactional throughput against its claimed capacity. In addition, it also tested its response to adverse network conditions, how it responds to variable transaction rates and sizes, its average transaction completion time, its partition tolerance and its fault tolerance. The results are far from flattering.
In a press release about the EOS test published on November 2, Whiteblock stated bluntly:
“EOS is not a blockchain, rather a distributed homogeneous database management system, a clear distinction in that their transactions are not cryptographically validated. EOS token and RAM market is essentially a cloud service where the network provides promises for computational resources in a blackbox for users to access via credits. There is no mechanism for accountability due to the lack of transparency on what Block producers are able to create in terms of computational power.”
According to Whiteblock, the actual throughput recorded by EOS under “realistic” network conditions is substantially lower than that claimed by EOS marketing materials, and the network suffers from a basic security problem of repeated consensus failure and lack of Byzantine Fault Tolerance.
In June, CCN reported that barely a week after the launch of its mainnet, EOS became immersed in controversy after an incident with its block producers which led many to question the extent of the network’s decentralisation. Whiteblock’s findings would appear to lend credence to those fears, which could have a significant effect on the EOS price.
Delivering its verdict on the network as a whole, Whiteblock said:
“The research results prove the inaccuracies in performance claims and concluded that the foundation of the EOS system is built on a flawed model that is not truly decentralized.”
CCN Exclusive Interview With Whiteblock CTO
Following the release of the report, CCN interviewed Whiteblock CTO Zak Cole to get his exclusive comment on the implication of the report for the EOS community and the blockchain ecosystem at large.
CCN: Your research concludes that EOS transactions are not cryptographically validated, making it a distributed homogeneous database, as against a blockchain. What is the implication of this for EOS as an ecosystem? Does it significantly change the picture of what EOS promised to achieve (Ethereum killer), and should EOS investors and users be worried?
Cole: My hope is that the results of our research can help provide a healthy foundation for community discussion rather than perpetuate some sort of political war between rival factions. I believe the EOS ecosystem needs to evaluate their long term goals in order to  identify a concise roadmap that can help build the system which was initially presented. It is not productive to pit Ethereum against EOS when the two systems are drastically different: one is a decentralized peer-to-peer network backed by cryptographic proofs and the other is an optimized distributed database which functions more similarly to an Infrastructure-as-a-Service product one would find on a common cloud computing platform.
At Whiteblock, we aren’t EOS people. We aren’t Ethereum people. We’re blockchain people. The intent of our research wasn’t to prove that one is better than the other, but rather provide an objective and scientific analysis the community can reference in order to build high-performing and functional systems. The Whiteblock team will also be mentoring at the EOS Hackathon in San Francisco next week. Our only goal is to assist in the efforts of building a bridge that allows blockchain technology to transition from the realm of fringe science to a viable solution that can provide practical use and shape the decentralized world of the future. This is why we developed the Whiteblock testing framework.
The community needs development tools which can provide transparent and objective performance data to distinguish fact from marketing language and understand the function of the systems we are building. The bottom line is that EOS is not capable of providing throughput to the degree which has been implied and it won’t be able to anytime soon. The system is simply unable to perform in accordance with the messaging that has driven their multi-billion dollar campaign. There’s a lot of work to be done and I hope they’re able to deliver on what was promised. Either way, it’s been an informative experiment in distributed computing.
EOS investors and users should only be worried if they’ve speculatively gambled on profiting from the unregulated market of an emerging technology.
CCN: The research also states that the actual throughput of EOS is significantly less than was claimed. In layman’s terms, what does that means for users and dApp developers?
Cole: When determining which platform is best suited for building your decentralized application, developers should first evaluate their priorities. If you’d like to experiment with the capabilities of decentralized peer-to-peer transactional logic, ensure that the system is actually capable of providing the functionality required to do so. If you want something that offers a high degree of transactional throughput, what’s the problem with using an existing payment gateway like Shopify or Stripe? There’s no shame in sticking to traditional client/server architectures that actually work.
Another important thing to mention is that EOS isn’t really free of transaction fees. Instead, these costs are offset to the dApp developers themselves, and the cost of running these applications can be prohibitively expensive. This is going to create a market similar to what we already see in most software systems, like Apple’s App Store, and users will likely end up paying a significant amount more than they anticipated. I don’t know if anyone has yet to notice the significant drop in successfully processed transactions as latency and user volume rise either, but there are more important factors at play than just throughput.
CCN: Does EOS essentially present a security risk to users, or are these shortcomings things that can be fixed?
Cole: I believe the EOS system, as it is now, presents inherent security vulnerabilities. There is no effective implementation of game theory or additional algorithmic mechanisms to ensure the block producers are behaving the way they should, and there is no guarantee that the assets you store today will be available or accessible tomorrow. The entire value of the EOS consensus model is based on a token holder’s ability to vote for which block producer they choose to act on their behalf, but when there’s nothing stopping the block producers themselves from casting votes in their own self-interest, what’s the point? Even if there were, there are no functions, cryptographic, computational, or otherwise, which govern block producer behavior. This is glaringly apparent and doesn’t take a three-month research project to understand.
That being said, these shortcomings can be fixed, but if they were, EOS would likely be no different than many other masternode systems like Dash or Syscoin.
CCN: Does the fact that the study was commissioned by ConsenSys represent a conflict of interest? [Editor’s Note: ConsenSys is an Ethereum development studio with significant investment in ETH applications]
Cole: Our research was funded by about 20 organizations in addition to ConsenSys. Funding was also provided by Bo Shen, Dan Larimer’s former partner and co-founder of Bitshares, which EOS used as the basis for much of their technology. ConsenSys funding a portion of the research initiatives has no influence on the scientific process and should really be considered a moot point. We’ve conducted the same tests on Ethereum and pointed out their flaws as well. The Ethereum community was receptive to our research and engaged us further to continue our research. We’ve worked with dozens of blockchain systems. The purpose of our tests isn’t to point out what’s good about a system. This isn’t a beauty contest. In order to build more effective and higher performing systems, we should be objective and transparent and identify weaknesses in order to optimize and account for them in the design process. If the EOS community chooses to be combative towards tests and observations of this nature, the entire ecosystem is doomed and will certainly never achieve its purported scale.
Here is a link to our research which cites several significant security and performance flaws in Ethereum.
0 notes
endenogatai · 7 years ago
Text
Europe eyeing bot IDs, ad transparency and blockchain to fight fakes
European Union lawmakers want online platforms to come up with their own systems to identify bot accounts.
This is as part of a voluntary Code of Practice the European Commission now wants platforms to develop and apply — by this summer — as part of a wider package of proposals it’s put out which are generally aimed at tackling the problematic spread and impact of disinformation online.
The proposals follow an EC-commissioned report last month, by its High-Level Expert Group, which recommended more transparency from online platforms to help combat the spread of false information online — and also called for urgent investment in media and information literacy education, and strategies to empower journalists and foster a diverse and sustainable news media ecosystem.
Bots, fake accounts, political ads, filter bubbles
In an announcement on Friday the Commission said it wants platforms to establish “clear marking systems and rules for bots” in order to ensure “their activities cannot be confused with human interactions”. It does not go into a greater level of detail on how that might be achieved. Clearly it’s intending platforms to have to come up with relevant methodologies.
Identifying bots is not an exact science — as academics conducting research into how information spreads online could tell you. The current tools that exist for trying to spot bots typically involve rating accounts across a range of criteria to give a score of how likely an account is to be algorithmically controlled vs human controlled. But platforms do at least have a perfect view into their own systems, whereas academics have had to rely on the variable level of access platforms are willing to give them.
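As a rough illustration of that scoring approach, a minimal heuristic might look like the sketch below: each signal adds to a weighted score and accounts above some threshold are flagged for review. The features, weights and threshold here are invented for the example; production bot detectors use much richer feature sets and trained models rather than hand-picked rules.

```python
def bot_likelihood(account: dict) -> float:
    """Return a 0-1 score estimating how likely an account is automated.

    `account` is an assumed dict of pre-computed features, e.g.
    {"posts_per_day": 180, "followers": 12, "following": 4800,
     "default_profile_image": True, "account_age_days": 20}.
    """
    score = 0.0
    if account["posts_per_day"] > 100:          # posting rates hard to sustain manually
        score += 0.35
    if account["following"] > 10 * max(account["followers"], 1):  # lopsided follow ratio
        score += 0.25
    if account["default_profile_image"]:        # weak signal, but cheap to check
        score += 0.15
    if account["account_age_days"] < 30:        # very new accounts are more suspect
        score += 0.25
    return min(score, 1.0)

suspect = {"posts_per_day": 180, "followers": 12, "following": 4800,
           "default_profile_image": True, "account_age_days": 20}
print(bot_likelihood(suspect))  # 1.0 -> flag for human review, not automatic removal
```

The weighted-score design reflects the article's point: no single signal is conclusive, so the output is a probability rather than a verdict, and a platform with access to login patterns, IP addresses and device data can score far more accurately than outside researchers.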
Another factor here is that given the sophisticated nature of some online disinformation campaigns — the state-sponsored and heavily resourced efforts by Kremlin backed entities such as Russia’s Internet Research Agency, for example — if the focus ends up being algorithmically controlled bots vs IDing bots that might have human agents helping or controlling them, plenty of more insidious disinformation agents could easily slip through the cracks.
That said, other measures in the EC’s proposals for platforms include stepping up their existing efforts to shutter fake accounts and being able to demonstrate the “effectiveness” of such efforts — so greater transparency around how fake accounts are identified and the proportion being removed (which could help surface more sophisticated human-controlled bot activity on platforms too).
Another measure from the package: The EC says it wants to see “significantly” improved scrutiny of ad placements — with a focus on trying to reduce revenue opportunities for disinformation purveyors.
Restricting targeting options for political advertising is another component. “Ensure transparency about sponsored content relating to electoral and policy-making processes,” is one of the listed objectives on its fact sheet — and ad transparency is something Facebook has said it’s prioritizing since revelations about the extent of Kremlin disinformation on its platform during the 2016 US presidential election, with expanded tools due this summer.
The Commission also says generally that it wants platforms to provide “greater clarity about the functioning of algorithms” and enable third-party verification — though there’s no greater level of detail being provided at this point to indicate how much algorithmic accountability it’s after from platforms.
We’ve asked for more on its thinking here and will update this story with any response. It looks to be seeking to test the water to see how much of the workings of platforms’ algorithmic blackboxes can be coaxed from them voluntarily — such as via measures targeting bots and fake accounts — in an attempt to stave off formal and more fulsome regulations down the line.
Filter bubbles also appear to be informing the Commission’s thinking, as it says it wants platforms to make it easier for users to “discover and access different news sources representing alternative viewpoints” — via tools that let users customize and interact with the online experience to “facilitate content discovery and access to different news sources”.
Though another stated objective is for platforms to “improve access to trustworthy information” — so there are questions about how those two aims can be balanced, i.e. without efforts towards one undermining the other. 
On trustworthiness, the EC says it wants platforms to help users assess whether content is reliable using “indicators of the trustworthiness of content sources”, as well as by providing “easily accessible tools to report disinformation”.
In one of several steps Facebook has taken since 2016 to try to tackle the problem of fake content being spread on its platform the company experimented with putting ‘disputed’ labels or red flags on potentially untrustworthy information. However the company discontinued this in December after research suggested negative labels could entrench deeply held beliefs, rather than helping to debunk fake stories.
Instead it started showing related stories — containing content it had verified as coming from news outlets its network of fact checkers considered reputable — as an alternative way to debunk potential fakes.
The Commission’s approach looks to be aligning with Facebook’s rethought approach — with the subjective question of how to make judgements on what is (and therefore what isn’t) a trustworthy source likely being handed off to third parties, given that another strand of the code is focused on “enabling fact-checkers, researchers and public authorities to continuously monitor online disinformation”.
Since 2016 Facebook has been leaning heavily on a network of local third party ‘partner’ fact-checkers to help identify and mitigate the spread of fakes in different markets — including checkers for written content and also photos and videos, the latter in an effort to combat fake memes before they have a chance to go viral and skew perceptions.
In parallel Google has also been working with external fact checkers, such as on initiatives such as highlighting fact-checked articles in Google News and search. 
The Commission clearly approves of the companies reaching out to a wider network of third party experts. But it is also encouraging work on innovative tech-powered fixes to the complex problem of disinformation — describing AI (“subject to appropriate human oversight”) as set to play a “crucial” role for “verifying, identifying and tagging disinformation”, and pointing to blockchain as having promise for content validation.
Specifically it reckons blockchain technology could play a role by, for instance, being combined with the use of “trustworthy electronic identification, authentication and verified pseudonyms” to preserve the integrity of content and validate “information and/or its sources, enable transparency and traceability, and promote trust in news displayed on the Internet”.
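The content-integrity part of that idea is simple to sketch: a publisher hashes an article (and would sign and anchor that hash on a chain), and any reader can later recompute the hash to confirm the text has not been altered. The snippet below shows only the hashing and verification step using Python's standard library; the electronic-identification layer and the actual on-chain anchoring are assumptions left out of the example.

```python
import hashlib
import json

def fingerprint(article_text: str, source_id: str) -> dict:
    """Build the record a publisher could sign and anchor on a blockchain."""
    digest = hashlib.sha256(article_text.encode("utf-8")).hexdigest()
    return {"source": source_id, "sha256": digest}

def verify(article_text: str, record: dict) -> bool:
    """Re-hash the text a reader received and compare it with the anchored record."""
    return hashlib.sha256(article_text.encode("utf-8")).hexdigest() == record["sha256"]

original = "European Union lawmakers want online platforms to ..."
record = fingerprint(original, source_id="news-outlet-123")
print(json.dumps(record))

tampered = original.replace("want", "do not want")
print(verify(original, record))   # True: the content matches what was anchored
print(verify(tampered, record))   # False: any edit changes the hash
```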
It’s one of a handful of nascent technologies the executive flags as potentially useful for fighting fake news, and whose development it says it intends to support via an existing EU research funding vehicle: The Horizon 2020 Work Program.
It says it will use this program to support research activities on “tools and technologies such as artificial intelligence and blockchain that can contribute to a better online space, increasing cybersecurity and trust in online services”.
It also flags “cognitive algorithms that handle contextually-relevant information, including the accuracy and the quality of data sources” as a promising tech to “improve the relevance and reliability of search results”.
The Commission is giving platforms until July to develop and apply the Code of Practice — and is using the possibility that it could still draw up new laws if it feels the voluntary measures fail as a mechanism to encourage companies to put the sweat in.
It is also proposing a range of other measures to tackle the online disinformation issue — including:
An independent European network of fact-checkers: The Commission says this will establish “common working methods, exchange best practices, and work to achieve the broadest possible coverage of factual corrections across the EU”; and says they will be selected from the EU members of the International Fact Checking Network which it notes follows “a strict International Fact Checking Network Code of Principles”
A secure European online platform on disinformation to support the network of fact-checkers and relevant academic researchers with “cross-border data collection and analysis”, as well as benefitting from access to EU-wide data
Enhancing media literacy: On this it says a higher level of media literacy will “help Europeans to identify online disinformation and approach online content with a critical eye”. So it says it will encourage fact-checkers and civil society organisations to provide educational material to schools and educators, and organise a European Week of Media Literacy
Support for Member States in ensuring the resilience of elections against what it dubs “increasingly complex cyber threats” including online disinformation and cyber attacks. Stated measures here include encouraging national authorities to identify best practices for the identification, mitigation and management of risks in time for the 2019 European Parliament elections. It also notes work by a Cooperation Group, saying “Member States have started to map existing European initiatives on cybersecurity of network and information systems used for electoral processes, with the aim of developing voluntary guidance” by the end of the year. It also says it will organise a high-level conference with Member States on cyber-enabled threats to elections in late 2018
Promotion of voluntary online identification systems with the stated aim of improving the “traceability and identification of suppliers of information” and promoting “more trust and reliability in online interactions and in information and its sources”. This includes support for related research activities in technologies such as blockchain, as noted above. The Commission also says it will “explore the feasibility of setting up voluntary systems to allow greater accountability based on electronic identification and authentication scheme” — as a measure to tackle fake accounts. “Together with others actions aimed at improving traceability online (improving the functioning, availability and accuracy of information on IP and domain names in the WHOIS system and promoting the uptake of the IPv6 protocol), this would also contribute to limiting cyberattacks,” it adds
Support for quality and diversified information: The Commission is calling on Member States to scale up their support of quality journalism to ensure a pluralistic, diverse and sustainable media environment. The Commission says it will launch a call for proposals in 2018 for “the production and dissemination of quality news content on EU affairs through data-driven news media”
It says it will aim to co-ordinate its strategic comms policy to try to counter “false narratives about Europe” — which makes you wonder whether debunking the output of certain UK tabloid newspapers might fall under that new EC strategy — and also more broadly to tackle disinformation “within and outside the EU”.
Commenting on the proposals in a statement, the Commission’s VP for the Digital Single Market, Andrus Ansip, said: “Disinformation is not new as an instrument of political influence. New technologies, especially digital, have expanded its reach via the online environment to undermine our democracy and society. Since online trust is easy to break but difficult to rebuild, industry needs to work together with us on this issue. Online platforms have an important role to play in fighting disinformation campaigns organised by individuals and countries who aim to threaten our democracy.”
The EC’s next steps now will be bringing the relevant parties together — including platforms, the ad industry and “major advertisers” — in a forum to work on greasing cooperation and getting them to apply themselves to what are still, at this stage, voluntary measures.
“The forum’s first output should be an EU–wide Code of Practice on Disinformation to be published by July 2018, with a view to having a measurable impact by October 2018,” says the Commission. 
The first progress report will be published in December 2018. “The report will also examine the need for further action to ensure the continuous monitoring and evaluation of the outlined actions,” it warns.
And if self-regulation fails…
In a fact sheet further fleshing out its plans, the Commission states: “Should the self-regulatory approach fail, the Commission may propose further actions, including regulatory ones targeted at a few platforms.”
And for “a few” read: Mainstream social platforms — so likely the big tech players in the social digital arena: Facebook, Google, Twitter.
For potential regulatory actions tech giants only need look to Germany, where a 2017 social media hate speech law has introduced fines of up to €50M for platforms that fail to comply with valid takedown requests within 24 hours for simple cases, for an example of the kind of scary EU-wide law that could come rushing down the pipe at them if the Commission and EU states decide it’s necessary to legislate.
Though justice and consumer affairs commissioner, Vera Jourova, signaled in January that her preference on hate speech at least was to continue pursuing the voluntary approach — though she also said some Member States’ ministers are open to a new EU-level law should the voluntary approach fail.
In Germany the so-called NetzDG law has faced criticism for pushing platforms towards risk aversion-based censorship of online content. And the Commission is clearly keen to avoid such charges being leveled at its proposals, stressing that if regulation were to be deemed necessary “such [regulatory] actions should in any case strictly respect freedom of expression”.
Commenting on the Code of Practice proposals, a Facebook spokesperson told us: “People want accurate information on Facebook – and that’s what we want too. We have invested heavily in fighting false news on Facebook by disrupting the economic incentives for the spread of false news, building new products and working with third-party fact checkers.”
A Twitter spokesman declined to comment on the Commission’s proposals but flagged contributions he said the company is already making to support media literacy — including an event last week at its EMEA HQ.
At the time of writing Google had not responded to a request for comment.
Last month the Commission did further tighten the screw on platforms over terrorist content specifically —  saying it wants them to get this taken down within an hour of a report as a general rule. Though it still hasn’t taken the step to cement that hour ‘rule’ into legislation, also preferring to see how much action it can voluntarily squeeze out of platforms via a self-regulation route.
0 notes
fitriautm-blog · 8 years ago
Text
Software Testing Techniques
Definition of testing: testing is the process of executing a program with the intent of finding errors. A good test case is one with a high probability of uncovering errors that have not yet been revealed. A successful test is one that exposes an error that had not been found before.

The testing process:
- System testing: testing the integration of sub-systems, i.e. the connections between sub-systems.
- Acceptance testing: the final test before the system is used by the user; it involves testing with data from the system's users.
- Component testing: testing the individual program components; usually performed by the component developer (except for critical systems).
- Integration testing: testing groups of components integrated to form a sub-system or a system; performed by an independent test team, based on the system specification.

The test plan covers:
- The testing process: a description of the main phases of testing.
- Requirements traceability: every user requirement is tested individually.
- Items to be tested: specifies which system components are under test.
- The testing schedule.
- Result-recording procedures.
- Hardware and software requirements.
- Constraints, e.g. shortages of staff, tools, time, etc.

Testing techniques: white-box testing and black-box testing.

White-box testing is based on a detailed examination of procedural logic. The logical paths through the software are exercised with test cases that drive particular sets of conditions and/or loops. The state of the program can be examined at various points to determine whether the expected or asserted state matches the actual state.

Black-box testing is performed at the software's interface. Although designed to find errors, black-box tests are used to demonstrate that the software's functions are operational: that input is accepted correctly, that the output produced is correct, and that the integrity of external information is preserved. Black-box testing examines several aspects of the system but reveals little about the software's internal logical structure.

Testing an information system:
- Testing at data input: exercises the edits and controls applied when data is entered, for example validation and check digits.
- Testing during processing: aims to confirm that the program behaves as expected.
- Testing at output: confirms that the reports produced are in the correct format and contain valid information.

Controls at data input (see the code sketch at the end of this post):
1. Character checks: verify whether a field accepts only certain characters.
2. Numeric value checks: character checks specialised for numeric characters.
3. Check digit: when a number carries a check digit, the system rejects any input whose digits are not consistent with it.
4. Limit tests: some fields only accept values within a restricted range.
5. Reasonableness tests: like limit tests, but the restriction is based on what is logically reasonable.
6. Internal compatibility: data that has been entered should be compatible with the other data in the same application.
7. Cross-checks with data in other applications: a value is cross-checked against another application, and an error message is produced if a mismatch is found; the intent is to confirm that the function works correctly.
8. Duplicate transactions: a system should be built to reject duplicate transactions.
9. Table look-ups: when a particular code is entered in a field, the system accesses the appropriate table and returns the correct information.
10. Existence of required data: if required data is missing, the system must report that the required data does not exist, so its status is clear.
11. Confirmation screens: a display that asks the user to confirm that the data entered is correct.
12. Field lengths and overflow checks: a field can be given a fixed length, and a limit can be placed on how much data can be entered into it.

Testing during data processing:
1. Delete vs reverse: data that has been entered must later be deletable or traceable backwards (its history). Once a transaction has been updated, it may need to be deleted or reversed.
2. Automatically triggered processing: if a system performs automatically triggered processing, testing must confirm that the calculation or processing is carried out correctly, that the right parameters were used, and that the output of the processing is accurate.
3. Updating: tests are run to confirm that the system is updated correctly with the data that was entered.
4. Audit trails: as with updating, testing confirms that the system log and the audit trail are working properly.
5. Table values: these tests cover the procedures for updating system parameters and code tables; the update transactions must work correctly, including the data-entry edits.
6. Arithmetic calculations: these tests confirm that every arithmetic calculation follows the intended formula; where necessary, cross-check the reports against the calculations to make sure they are correct.
7. Database management system testing: the database structure must also be tested to confirm that the design is correct.

Testing output:
1. Summary reports can be inspected to confirm that the report's format and content match what is required.
2. Make sure the arithmetic in the report is correct.
3. Ideally, all reports should be produced and compared with earlier reports to spot errors.
4. For special-purpose reports, testing must also confirm that the data extracted from its source is complete and matches the specific criteria expected.
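As a rough illustration, here is a minimal Python sketch of a few of the input-data controls listed above (character check, numeric value check, check digit, and limit test). The field names, the permitted ranges, and the use of the Luhn algorithm as the check-digit scheme are assumptions made for the example, not something prescribed by the notes above.

```python
# Minimal sketch of a few of the input-data controls described above.
# Field names, ranges, and the Luhn algorithm as the check-digit scheme
# are illustrative assumptions, not part of the original notes.

def character_check(value: str, allowed: str) -> bool:
    """1. Character check: the field may only contain the allowed characters."""
    return all(ch in allowed for ch in value)

def numeric_check(value: str) -> bool:
    """2. Numeric value check: the field must contain digits only."""
    return value.isdigit()

def luhn_check_digit(value: str) -> bool:
    """3. Check digit: reject input whose digits fail the (assumed) Luhn checksum."""
    if not value.isdigit():
        return False
    total = 0
    for i, ch in enumerate(reversed(value)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def limit_test(value: float, low: float, high: float) -> bool:
    """4. Limit test: the value must fall inside the permitted range."""
    return low <= value <= high

if __name__ == "__main__":
    # Example: validate a hypothetical customer record before processing.
    record = {"customer_id": "79927398713", "age": "42", "country_code": "ID"}
    errors = []
    if not character_check(record["country_code"], "ABCDEFGHIJKLMNOPQRSTUVWXYZ"):
        errors.append("country_code: character check failed")
    if not numeric_check(record["age"]):
        errors.append("age: numeric value check failed")
    elif not limit_test(int(record["age"]), 0, 120):
        errors.append("age: limit/reasonableness test failed")
    if not luhn_check_digit(record["customer_id"]):
        errors.append("customer_id: check digit failed")
    print(errors or "record passed input validation")
```

Each check here rejects a record before it reaches processing, which is the point of input-stage controls: errors caught at entry never have to be reversed or audited later.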
0 notes
endenogatai · 7 years ago
Text
Data experts on Facebook’s GDPR changes: Expect lawsuits
Make no mistake: Fresh battle lines are being drawn in the clash between data-mining tech giants and Internet users over people’s right to control their personal information and protect their privacy.
An update to European Union data protection rules next month — called the General Data Protection Regulation — is the catalyst for this next chapter in the global story of tech vs privacy.
A fairytale ending would remove that ugly ‘vs’ and replace it with an enlightened ‘+’. But there’s no doubt it will be a battle to get there — requiring legal challenges and fresh case law to be set down — as an old guard of dominant tech platforms marshal their extensive resources to try to hold onto the power and wealth gained through years of riding roughshod over data protection law.
Payback is coming though. Balance is being reset. And the implications of not regulating what tech giants can do with people’s data has arguably never been clearer.
The exciting opportunity for startups is to skate to where the puck is going — by thinking beyond exploitative legacy business models that amount to embarrassing blackboxes whose CEOs dare not publicly admit what the systems really do — and come up with new ways of operating and monetizing services that don’t rely on selling the lie that people don’t care about privacy.
  More than just small print
Right now the EU’s General Data Protection Regulation can take credit for a whole lot of spilt ink as tech industry small print is reworded en masse. Did you just receive a T&C update notification about a company’s digital service? Chances are it’s related to the incoming standard.
The regulation is generally intended to strengthen Internet users’ control over their personal information, as we’ve explained before. But its focus on transparency — making sure people know how and why data will flow if they choose to click ‘I agree’ — combined with supersized fines for major data violations represents something of an existential threat to ad tech processes that rely on pervasive background harvesting of users’ personal data to be siphoned off as fuel for their vast, proprietary microtargeting engines.
This is why Facebook is not going gentle into a data processing goodnight.
Indeed, it’s seizing on GDPR as a PR opportunity — shamelessly stamping its brand on the regulatory changes it lobbied so hard against, including by taking out full page print ads in newspapers…
Here we are. Wow, what a fun thinking about all these years of debates with fb representatives telling me ‘consumers don’t want privacy rights anymore’ and ‘a startup (sic) like facebook shouldn’t be overburdened’. #GDPR #dataprotection #privacy https://t.co/gowYVvKjJf
— Jan Philipp Albrecht (@JanAlbrecht) April 15, 2018
This is of course another high gloss plank in the company’s PR strategy to try to convince users to trust it — and thus to keep giving it their data. Because — and only because — GDPR gives consumers more opportunity to lock down access to their information and close the shutters against countless prying eyes.
But the pressing question for Facebook — and one that will also test the mettle of the new data protection standard — is whether or not the company is doing enough to comply with the new rules.
One important point re: Facebook and GDPR is that the standard applies globally, i.e. for all Facebook users whose data is processed by its international entity, Facebook Ireland (and thus within the EU); but not necessarily universally — with Facebook users in North America not legally falling under the scope of the regulation.
Users in North America will only benefit if Facebook chooses to apply the same standard everywhere. (And on that point the company has stayed exceedingly fuzzy.)
It has claimed it won’t give US and Canadian users second tier status vs the rest of the world where their privacy is concerned — saying they’re getting the same “settings and controls” — but unless or until US lawmakers spill some ink of their own there’s nothing but an embarrassing PR message to regulate what Facebook chooses to do with Americans’ data. It’s the data protection principles, stupid.
Zuckerberg was asked by US lawmakers last week what kind of regulation he would and wouldn’t like to see laid upon Internet companies — and he made a point of arguing for privacy carve outs to avoid falling behind, of all things, competitors in China.
Which is an incredibly chilling response when you consider how few rights — including human rights — Chinese citizens have. And how data-mining digital technologies are being systematically used to expand Chinese state surveillance and control.
The ugly underlying truth of Facebook’s business is that it also relies on surveillance to function. People’s lives are its product.
That’s why Zuckerberg couldn’t tell US lawmakers to hurry up and draft their own GDPR. He’s the CEO saddled with trying to sell an anti-privacy, anti-transparency position — just as policymakers are waking up to what that really means.
  Plus ça change?
Facebook has announced a series of updates to its policies and platform in recent months, which it’s said are coming to all users (albeit in ‘phases’). The problem is that most of what it’s proposing to achieve GDPR compliance is simply not adequate.
Coincidentally many of these changes have been announced amid a major data mishandling scandal for Facebook, in which it’s been revealed that data on up to 87M users was passed to a political consultancy without their knowledge or consent.
It’s this scandal that led Zuckerberg to be perched on a booster cushion in full public view for two days last week, dodging awkward questions from US lawmakers about how his advertising business functions.
He could not tell Congress there wouldn’t be other such data misuse skeletons in its closet. Indeed the company has said it expects it will uncover additional leaks as it conducts a historical audit of apps on its platform that had access to “a large amount of data”. (How large is large, one wonders… )
But whether Facebook’s business having enabled — in just one example — the clandestine psychological profiling of millions of Americans for political campaign purposes ends up being the final, final straw that catalyzes US lawmakers to agree their own version of GDPR is still tbc.
Any new law will certainly take time to formulate and pass. In the meanwhile GDPR is it.
The most substantive GDPR-related change announced by Facebook to date is the shuttering of a feature called Partner Categories — in which it allowed the linking of its own information holdings on people with data held by external brokers, including (for example) information about people’s offline activities.
Evidently finding a way to close down the legal liabilities and/or engineer consent from users to that degree of murky privacy intrusion — involving pools of aggregated personal data gathered by goodness knows who, how, where or when — was a bridge too far for the company’s army of legal and policy staffers.
Other notable changes it has so far made public include consolidating settings onto a single screen vs the confusing nightmare Facebook has historically required users to navigate just to control what’s going on with their data (remember the company got a 2011 FTC sanction for “deceptive” privacy practices); rewording its T&Cs to make it more clear what information it’s collecting for what specific purpose; and — most recently — revealing a new consent review process whereby it will be asking all users (starting with EU users) whether they consent to specific uses of their data (such as processing for facial recognition purposes).
As my TC colleague Josh Constine wrote earlier in a critical post dissecting the flaws of Facebook’s approach to consent review, the company is — at very least — not complying with the spirit of GDPR’s law.
Indeed, Facebook appears pathologically incapable of abandoning its long-standing modus operandi of socially engineering consent from users (doubtless fed via its own self-reinforced A/B testing ad expertise). “It feels obviously designed to get users to breeze through it by offering no resistance to continue, but friction if you want to make changes,” was his summary of the process.
But, as we’ve pointed out before, concealment is not consent.
To get into a few specifics, pre-ticked boxes — which is essentially what Facebook is deploying here, with a big blue “accept and continue” button designed to grab your attention as it’s juxtaposed against an anemic “manage data settings” option (which if you even manage to see it and read it sounds like a lot of tedious hard work) — aren’t going to constitute valid consent under GDPR.
Nor is this what ‘privacy by default’ looks like — another staple principle of the regulation. On the contrary, Facebook is pushing people to do the opposite: Give it more of their personal information — and fuzzing why it’s asking by bundling a range of usage intentions.
The company is risking a lot here.
In simple terms, seeking consent from users in a way that’s not fair because it’s manipulative means consent is not being freely given. Under GDPR, it won’t be consent at all. So Facebook appears to be seeing how close to the wind it can fly to test how regulators will respond.
Safe to say, EU lawmakers and NGOs are watching.
  “Yes, they will be taken to court”
“Consent should not be regarded as freely given if the data subject has no genuine or free choice or is unable to refuse or withdraw consent without detriment,” runs one key portion of GDPR.
Now compare that with: “People can choose to not be on Facebook if they want” — which was Facebook’s deputy chief privacy officer, Rob Sherman’s, paper-thin defense to reporters for the lack of an overall opt out for users to its targeted advertising.
Data protection experts who TechCrunch spoke to suggest Facebook is failing to comply with not just the spirit but the letter of the law here. Some were exceedingly blunt on this point.
“I am less impressed,” said law professor Mireille Hildebrandt discussing how Facebook is railroading users into consenting to its targeted advertising. “It seems they have announced that they will still require consent for targeted advertising and refuse the service if one does not agree. This violates [GDPR] art. 7.4 jo recital 43. So, yes, they will be taken to court.”
Facebook says users must accept targeted ads even under new EU law: NO THEY MUST NOT, there are other types of advertising, subscription etc. https://t.co/zrUgsgxtwo
— Mireille Hildebrandt (@mireillemoret) April 18, 2018
“Zuckerberg appears to view the combination of signing up to T&Cs and setting privacy options as ‘consent’,” adds cyber security professor Eerke Boiten. “I doubt this is explicit or granular enough for the personal data processing that FB do. The default settings for the privacy settings certainly do not currently provide for ‘privacy by default’ (GDPR Art 25).
“I also doubt whether FB Custom Audiences work correctly with consent. FB finds out and retains a small bit of personal info through this process (that an email address they know is known to an advertiser), and they aim to shift the data protection legal justification on that to the advertisers. Do they really then not use this info for future profiling?”
That looming tweak to the legal justification of Facebook’s Custom Audiences feature — a product which lets advertisers upload contact lists in a hashed form to find any matches among its own user-base (so those people can be targeted with ads on Facebook’s platform) — also looks problematical.
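To make that mechanism concrete before getting into the legal question: hashed contact-list matching of this general kind works by having both sides hash normalized contact details and compare the hashes. The sketch below is illustrative only; the normalization rules and the choice of SHA-256 are assumptions made for the example, not a description of Facebook's actual pipeline.

```python
# Illustrative sketch of hashed contact-list matching (not Facebook's actual
# implementation): both sides hash normalized email addresses and compare hashes.
import hashlib

def normalize_and_hash(email: str) -> str:
    # Assumed normalization: trim whitespace and lowercase before hashing.
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# Advertiser side: hash the contact list before upload.
advertiser_list = ["alice@example.com", "Bob@Example.com "]
uploaded_hashes = {normalize_and_hash(e) for e in advertiser_list}

# Platform side: hash its own users' addresses the same way and intersect.
platform_users = {"bob@example.com": "user_123", "carol@example.com": "user_456"}
platform_hashes = {normalize_and_hash(e): uid for e, uid in platform_users.items()}

matched_user_ids = [uid for h, uid in platform_hashes.items() if h in uploaded_hashes]
print(matched_user_ids)  # -> ['user_123']
```

The hashing means raw email addresses are not exchanged, but matches can still be found and targeted, which is exactly why the question of who counts as the data controller for that matching step matters.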
Here the company seems to be intending to try to claim a change in the legal basis, pushed out via new terms in which it instructs advertisers to agree they are the data controller (and it is merely a data processor). And thereby seek to foist a greater share of the responsibility for obtaining consent to processing user data onto its customers.
However such legal determinations are simply not a matter of contract terms. They are based on the fact of who is making decisions about how data is processed. And in this case — as other experts have pointed out — Facebook would be classed as a joint controller with any advertisers that upload personal data. The company can’t use a T&Cs change to opt out of that.
Wishful thinking is not a reliable approach to legal compliance.
  Fear and manipulation of highly sensitive data
Over many years of privacy-hostile operation, Facebook has shown it has a major appetite for even very sensitive data. And GDPR does not appear to have blunted that.
Let’s not forget, facial recognition was a platform feature that got turned off in the EU, thanks to regulatory intervention. Yet here Facebook is now trying to use GDPR as a route to process this sensitive biometric data for international users after all — by pushing individual users to consent to it by dangling a few ‘feature perks’ at the moment of consent.
Veteran data protection and privacy consultant, Pat Walshe, is unimpressed.
“The sensitive data tool appears to be another data grab,” he tells us, reviewing Facebook’s latest clutch of ‘GDPR changes’. “Note the subtlety. It merges ‘control of sharing’ such data with FB’s use of the data “to personalise features and products”. From the info available that isn’t sufficient to amount to consent for such sensitive data and nor is it clear folks can understand the broader implications of agreeing.
“Does it mean ads will appear in Instagram? WhatsApp etc? The default is also set to ‘accept’ rather than ‘review and consider’. This is really sensitive data we’re talking about.”
“The face recognition suggestions are woeful,” he continues. “The second image — is using an example… to manipulate and stoke fear — “we can’t protect you”.
“Also, the choices and defaults are not compatible with [GDPR] Article 25 on data protection by design and default nor Recital 32… If I say no to facial recognition it’s unclear if other users can continue to tag me.”
Of course it goes without saying that Facebook users will keep uploading group photos, not just selfies. What’s less clear is whether Facebook will be processing the faces of other people in those shots who have not given (and/or never even had the opportunity to give) consent to its facial recognition feature.
People who might not even be users of its product.
But if it does that it will be breaking the law. Yet Facebook does indeed profile non-users — despite Zuckerberg’s claims to Congress not to know about its shadow profiles. So the risk is clear.
It can’t give non-users “settings and controls” not to have their data processed. So it’s already compromised their privacy — because it never gained consent in the first place.
New Mexico Representative Ben Lujan made this point to Zuckerberg’s face last week and ended the exchange with a call to action: “So you’re directing people that don’t even have a Facebook page to sign up for a Facebook page to access their data… We’ve got to change that.”
[Image caption: Facebook co-founder, Chairman and CEO Mark Zuckerberg prepares to testify before the House Energy and Commerce Committee on Capitol Hill, April 11, 2018, his second day of testimony to Congress after it was reported that 87 million Facebook users had their personal information harvested by Cambridge Analytica. Photo by Chip Somodevilla/Getty Images]
But nothing in the measures Facebook has revealed so far, as its ‘compliance response’ to GDPR, suggest it intends to pro-actively change that.
Walshe also critically flags how — again, at the point of consent — Facebook’s review process deploys examples of the social aspects of its platform (such as how it can use people’s information to “suggest groups or other features or products”) as a tactic for manipulating people to agree to share religious affiliation data, for example.
“The social aspect is not separate to but bound up in advertising,” he notes, adding that the language also suggests Facebook uses the data.
Again, this whiffs a whole lot more than it smells like GDPR compliance.
“I don’t believe FB has done enough,” adds Walshe, giving a view on Facebook’s GDPR preparedness ahead of the May 25 deadline for the framework’s application — as Zuckerberg’s Congress briefing notes suggested the company itself believes it has. (Or maybe it just didn’t want to admit to Congress that U.S. Facebook users will get lower privacy standards vs users elsewhere.)
“In fact I know they have not done enough. Their business model is skewed against privacy — privacy gets in the way of advertising and so profit. That’s why Facebook has variously suggested people may have to pay if they want an ad free model & so ‘pay for privacy’.”
“On transparency, there is a long way to go,” adds Boiten. “Friend suggestions, profiling for advertising, use of data gathered from like buttons and web pixels (also completely missing from “all your Facebook data”), and the newsfeed algorithm itself are completely opaque.”
“What matters most is whether FB’s processing decisions will be GDPR compliant, not what exact controls are given to FB members,” he concludes.
US lawmakers also pumped Zuckerberg on how much of the information his company harvests on people who have a Facebook account is revealed to them when they ask for it — via its ‘Download your data’ tool.
His answers on this appeared to intentionally misconstrue what was being asked — presumably in a bid to mask the ugly reality of the true scope and depth of the surveillance apparatus he commands. (Sometimes with a few special ‘CEO privacy privileges’ thrown in — like being able to selectively retract just his own historical Facebook messages from conversations, ahead of bringing the feature to anyone else.)
‘Download your Data’ is clearly partial and self-serving — and thus it also looks very far from being GDPR compliant.
  Not even half the story
Facebook is not even complying with the spirit of current EU data protection law on data downloads. Subject Access Requests give individuals the right to request not just the information they have voluntarily uploaded to a service, but also the personal data the company holds about them, including a description of that personal data, the reasons it is being processed, and whether it will be given to any other organizations or people.
Facebook not only does not include people’s browsing history in the info it provides when you ask to download your data — which, incidentally, its own cookies policy confirms it tracks (via things like social plug-ins and tracking pixels on millions of popular websites etc etc) — it also does not include a complete list of advertisers on its platform that have your information.
Instead, after a wait, it serves up an eight-week snapshot. But even this two month view can still stretch to hundreds of advertisers per individual.
If Facebook gave users a comprehensive list of advertisers’ access to their information the number of third party companies would clearly stretch into the thousands. (In some cases thousands might even be a conservative estimate.)
There’s plenty of other information harvested from users that Facebook also intentionally fails to divulge via ‘Download your data’. And — to be clear — this isn’t a new problem either. The company has a very long history of blocking these type of requests.
In the EU it currently invokes an exception in Irish law to circumvent more fulsome compliance — which, even setting GDPR aside, raises some interesting competition law questions, as Paul-Olivier Dehaye told the UK parliament last month.
“All your Facebook data” isn’t a complete solution,” agrees Boiten. “It misses the info Facebook uses for auto-completing searches; it misses much of the information they use for suggesting friends; and I find it hard to believe that it contains the full profiling information.”
“Ads Topics” looks rather random and undigested, and doesn’t include the clear categories available to advertisers,” he further notes.
Facebook wouldn’t comment publicly about this when we asked. But it maintains its approach towards data downloads is GDPR compliant — and says it has reviewed what it offers with regulators to get feedback.
Earlier this week it also put out a wordy blog post attempting to defuse this line of attack by pointing the finger of blame at the rest of the tech industry — saying, essentially, that a whole bunch of other tech giants are at it too.
Which is not much of a moral defense even if the company believes its lawyers can sway judges with it. (Ultimately I wouldn’t fancy its chances; the EU’s top court has a robust record of defending fundamental rights.)
  Think of the children…
What its blog post didn’t say — yet again — was anything about how all the non-users it nonetheless tracks around the web are able to have any kind of control over its surveillance of them.
And remember, some Facebook non-users will be children.
So yes, Facebook is inevitably tracking kids’ data without parental consent. Under GDPR that’s a majorly big no-no.
TC’s Constine had a scathing assessment of even the on-platform system that Facebook has devised in response to GDPR’s requirements on parental consent for processing the data of users who are between the ages of 13 and 15.
“Users merely select one of their Facebook friends or enter an email address, and that person is asked to give consent for their ‘child’ to share sensitive info,” he observed. “But Facebook blindly trusts that they’ve actually selected their parent or guardian… [Facebook’s] Sherman says Facebook is “not seeking to collect additional information” to verify parental consent, so it seems Facebook is happy to let teens easily bypass the checkup.”
So again, the company is being shown doing the minimum possible — in what might be construed as a cynical attempt to check another compliance box and carry on its data-sucking business as usual.
Given that intransigence it really will be up to the courts to bring the enforcement stick. Change, as ever, is a process — and hard won.
Hildebrandt is at least hopeful that a genuine reworking of Internet business models is on the way, though — albeit not overnight. And not without a fight.
“In the coming years the landscape of all this silly microtargeting will change, business models will be reinvented and this may benefit both the advertisers, consumers and citizens,” she tells us. “It will hopefully stave off the current market failure and the uprooting of democratic processes… Though nobody can predict the future, it will require hard work.”
0 notes