our-ensemble · 5 months ago
I'm grateful for all the media analysis there has been regarding Enstars and its treatment of social issues, especially after AKATSUKI's event, but truthfully, acting as if Enstars is the only piece of media ever to have "problematic" aspects cheapens some of this analysis.
For many fans, Enstars crossed the line - but the important thing to remember is that media which has not crossed the line is not media that is "unproblematic", but media that has not yet reached the point where "tactless" or "tasteless" moments coalesce into specific and hostile ideologies.
Again, the way that people have been rethinking their engagement with Enstars is commendable, since it's not something that everybody is willing to consider; however, if this sort of thinking isn't applied to other spaces, then we risk running into the same problems over and over across time.
Enstars' problems aren't unique to it - they are likely inherited from other media, so it's definitely a pattern to be mindful of - which is why I hope that the lessons we're picking up from Enstars are also applied elsewhere.
By applying this level of analysis and conscientiousness to original works and fan spaces, we can hopefully create respectful places of discussion where the ideologies that hurt people in real life aren't also being used to hurt them in fiction and fandom.
hydrus · 7 years ago
Version 324
youtube
windows: zip · exe
os x: app · tar.gz
linux: tar.gz
source: tar.gz
I had a great week. The downloader overhaul is almost done.
pixiv
Just as Pixiv recently moved their art pages to a new phone-friendly, dynamically drawn format, they are now moving their regular artist gallery results to the same system. If your username isn't switched over yet, it likely will be in the coming week.
The change breaks our old html parser, so I have written a new downloader and json api parser. The way their internal api works is unusual and over-complicated, so I had to write a couple of small new tools to get it to work. However, it does seem to work again.
All of your subscriptions and downloaders will try to switch over to the new downloader automatically, but some might not handle it quite right, in which case you will have to go into edit subscriptions and update their gallery manually. You'll get a popup on updating to remind you of this, and if any don't line up right automatically, the subs will notify you when they next run. The api gives all content--illustrations, manga, ugoira, everything--so there unfortunately isn't a simple way to refine to just one content type as we previously could. But it does neatly deliver everything in just one request, so artist searching is now much faster.
Let me know if pixiv gives any more trouble. Now that we can parse their json, we might be able to reintroduce the arbitrary tag search, which broke some time ago due to the same move to javascript galleries.
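As an illustration of the kind of json parsing involved, here is a purely hypothetical sketch; the 'body'/'illusts'/'manga' field names are made up for the example and are not pixiv's real api:

```python
import json

# Hypothetical: pull work ids out of an artist-gallery json response,
# matching more than one top-level key (cf. the illusts|manga fix),
# then iterate lexicographically ascending as the parser now does.
def parse_gallery(raw_json):
    data = json.loads(raw_json)
    works = {}
    for key in ('illusts', 'manga'):  # string-match multiple dict keys
        works.update(data.get('body', {}).get(key, {}))
    return sorted(works)  # lexicographic ascending over the ids
```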
twitter
In a similar theme, given our fully developed parser and pipeline, I have now wangled a twitter username search! It should be added to your downloader list on update. It is a bit hacky and may be ultimately fragile if they change something on their end, but it otherwise works great. It discounts retweets and fetches 19-20 tweets per gallery 'page' fetch. You should be able to set up subscriptions and everything, although I generally recommend you go at it slowly until we know this new parser works well. BTW: I think twitter only 'browses' 3200 tweets into the past, anyway. Note that tweets with no images will be 'ignored', so any typical twitter search will end up with a lot of 'Ig' results--this is normal. Also, if the account ever retweets more than 20 times in a row, the search will stop there, due to how the clientside pipeline works (it'll think that page is empty).
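The stop condition described above can be sketched like this; `fetch_page` is a placeholder, not hydrus's actual downloader api:

```python
# Keep fetching gallery 'pages' until one comes back with no usable
# posts (e.g. 20+ retweets in a row, which are all filtered out and
# make the page look empty to the clientside pipeline).
def crawl_gallery(fetch_page, max_pages=160):
    results = []
    for page_index in range(max_pages):
        posts = fetch_page(page_index)
        usable = [p for p in posts if p is not None]  # None = filtered post
        if not usable:
            break  # an all-filtered or empty page stops the search here
        results.extend(usable)
    return results
```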
Again, let me know how this works for you. This is some fun new stuff for hydrus, and I am interested to see where it does well and badly.
misc
In order to be less annoying, the 'do you want to run idle jobs?' on shutdown dialog will now only ask at most once per day! You can edit the time unit under options->maintenance and processing.
Under options->connection, you can now change max total network jobs globally and per domain. The defaults are 15 and 3. I don't recommend you increase them unless you know what you are doing, but if you want a slower/more cautious client, please do set them lower.
The new advanced downloader ui has a bunch of quality of life improvements, mostly related to the handling of example parseable data.
full list
downloaders:
after adding some small new parser tools, wrote a new pixiv downloader that should work with their new dynamic gallery api. it fetches all an artist's work in one page. some existing pixiv download components will be renamed and detached from your existing subs and downloaders. your existing subs may switch over to the correct pixiv downloader automatically, or you may need to manually set them (you'll get a popup to remind you).
wrote a twitter username lookup downloader. it should skip retweets. it is a bit hacky, so it may collapse if they change something small with their internal javascript api. it fetches 19-20 tweets per 'page', so if the account has 20 rts in a row, it'll likely stop searching there. also, afaik, twitter browsing only works back 3200 tweets or so. I recommend proceeding slowly.
added a simple gelbooru 0.1.11 file page parser to the defaults. it won't link to anything by default, but it is there if you want to put together some booru.org stuff
you can now set your default/favourite download source under options->downloading
.
misc:
the 'do idle work on shutdown' system will now only ask/run once per x time units (including if you say no to the ask dialog). x is one day by default, but can be set in 'maintenance and processing'
added 'max jobs' and 'max jobs per domain' to options->connection. defaults remain 15 and 3
the colour selection buttons across the program now have a right-click menu to import/export #FF0000 hex codes from/to the clipboard
tag namespace colours and namespace rendering options are moved from 'colours' and 'tags' options pages to 'tag summaries', which is renamed to 'tag presentation'
the Lain import dropper now supports pngs with single gugs, url classes, or parsers--not just fully packaged downloaders
fixed an issue where trying to remove a selection of files from the duplicate system (through the advanced duplicates menu) would only apply to the first pair of files
improved some error reporting related to too-long filenames on import
improved error handling for the folder-scanning stage in import folders--now, when it runs into an error, it will preserve its details better, notify the user better, and safely auto-pause the import folder
png export auto-filenames will now be sanitized of \, /, :, *-type OS-path-invalid characters as appropriate as the dialog loads
the 'loading subs' popup message should appear more reliably (after 1s delay) if the first subs are big and loading slow
fixed the 'fullscreen switch' hover window button for the duplicate filter
deleted some old hydrus session management code and db table
some other things that I lost track of. I think it was mostly some little dialog fixes :/
.
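Two of the changelog items above, the hex-code clipboard support and the export-filename sanitization, can be sketched roughly as follows; the function names and the exact character set are illustrative, not hydrus's actual code:

```python
import re

# Round-trip a #RRGGBB hex code, as used by the colour buttons' new
# clipboard import/export (names here are illustrative).
def hex_to_rgb(code):
    code = code.lstrip('#')
    return tuple(int(code[i:i + 2], 16) for i in (0, 2, 4))

def rgb_to_hex(rgb):
    return '#{:02X}{:02X}{:02X}'.format(*rgb)

# Strip OS-path-invalid characters from an auto-generated export
# filename; the real character set hydrus sanitizes may differ.
def sanitize_filename(name, replacement='_'):
    return re.sub(r'[\\/:*?"<>|]', replacement, name)
```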
advanced downloader stuff:
the test panel on pageparser edit panels now has a 'post pre-parsing conversion' notebook page that shows the given example data after the pre-parsing conversion has occurred, including error information if it failed. it has a summary size/guessed type description and copy and refresh buttons.
the 'raw data' copy/fetch/paste buttons and description are moved down to the raw data page
the pageparser now passes up this post-conversion example data to sub-objects, so they now start with the correctly converted example data
the subsidiarypageparser edit panel now also has a notebook page, also with brief description and copy/refresh buttons, that summarises the raw separated data
the subsidiary page parser now passes up the first post to its sub-objects, so they now start with a single post's example data
content parsers can now sort the strings their formulae get back. you can sort strict lexicographic or the new human-friendly sort that does numbers properly, and of course you can go ascending or descending--if you can get the ids of what you want but they are in the wrong order, you can now easily fix it!
some json dict parsing code now iterates through dict keys lexicographically ascending by default. unfortunately, due to how the python json parser I use works, there isn't a way to process dict items in the original order
the json parsing formula now uses a string match when searching for dictionary keys, so you can now match multiple keys here (as in the pixiv illusts|manga fix). existing dictionary key look-ups will be converted to 'fixed' string matches
the json parsing formula can now get the content type 'dictionary keys', which will fetch all the text keys in the dictionary/Object, if the api designer happens to have put useful data in there, wew
formulae now remove newlines from their parsed texts before they are sent to the StringMatch! so, if you are grabbing some multi-line html and want to test for 'Posted: ' somewhere in that mess, it is now easy.
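A rough sketch of the two string-handling items above, the human-friendly sort and the newline stripping before StringMatch; hydrus's real implementations may differ:

```python
import re

# Human-friendly sort key: split into text and number runs so that
# 'page 2' sorts before 'page 10' (plain lexicographic would not).
def human_sort_key(s):
    return [int(part) if part.isdigit() else part.lower()
            for part in re.split(r'(\d+)', s)]

# Collapse whitespace (including newlines) so a simple substring test
# like 'Posted: ' works on multi-line html; an approximation of the
# newline removal described above.
def flatten_for_match(text):
    return ' '.join(text.split())
```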
next week
After slaughtering the redundant and completed issues in my downloader overhaul megajob (bringing my total todo from 1568 down to 1471!), I only have 15 jobs left to go. It is mostly some quality of life stuff and refreshing some out of date help. I should be able to clear most of them out next week, and the last few can be folded into normal work.
So I am now planning the login manager. After talking with several users over the past few weeks, I think it will be fundamentally very simple, supporting any basic user/pass web form, and will relegate complicated situations to some kind of improved browser cookies.txt import workflow. I suspect it will take 3-4 weeks to hash out, and then I will be taking four weeks to update to python 3, and then I am a free agent again. So, absent any big problems, please expect the 'next big thing to work on poll' to go up around the end of October, and for me to get going on that next big thing at the end of November. I don't want to finalise what goes on the poll yet, but I'll open up a full discussion as the login manager finishes.
robertrluc85 · 8 years ago
How to ensure your external PPC account audit isn’t a waste of time
If you run a PPC agency, you’ll know it’s not that unusual for clients to occasionally bring in an outside auditor to review their PPC accounts.
Sometimes, your client will let you know in advance; sometimes, you’ll find out when you see a request to access the account.
And sometimes, you won’t find out until after the fact, when the final report is forwarded to you for discussion!
I completely understand why some clients like to have an outside audit of their PPC accounts. For some companies, it’s simply part of their due diligence. For others, an executive will come up with the idea and push it through. And for some, it’s impossible to resist the allure of a “free” audit.
I can also understand why clients might hesitate to inform their PPC agency of their decision. They might feel embarrassed or uncomfortable about the situation. Or they may feel ambivalent about the audit itself.
In some cases, it may be that the client doesn’t trust the agency not to do some quick “fixes” in anticipation of the audit. (Although I have to say, if you don’t trust your agency enough to let them know of the audit in advance, you definitely shouldn’t trust them to run your campaigns!)
Whatever the situation, external audits are something that PPC agencies have to expect. But what's it like to go through one? And how could the process be improved?
Today, I’m going to tell you about a recent external audit one of our clients initiated and some of the issues the process raised.
When your client brings in an external auditor
In this case, our client let me know up front that they were bringing in an external auditor, which I appreciated. But at the same time, I was rather surprised, too. This was an account we’d held for about five years, and we had good communication with them. Moreover, we’d gotten them some excellent results, and everyone seemed very happy all round.
As we learned later, the audit came about because a different executive in the company had been approached with the offer of a free PPC audit, and he felt the company had nothing to lose. So they agreed to it.
Meanwhile, my contact at the company reassured me that they were happy with our work. She said they had worked with “good” and “bad” agencies before and knew the difference. She also recognized that the outside auditor wasn’t entirely neutral in this process. (Was this free audit a marketing strategy by the auditor? We weren’t sure. But assuredly, any “free” audit has strings attached.)
At the same time, I reminded myself that my agency had never lost a client due to an audit (knock wood!). More importantly, we had nothing to hide, and I had total confidence in my team and our work.
And who knows? Maybe the report would have some helpful recommendations. Having a fresh set of eyes on an account is never a bad idea.
Besides, how detailed would a “free” audit be?
A few days later, my client presented me with the report. And it was huge! It ran about 35 pages and was very detailed and thorough. At first, I was excited. Surely this would yield all kinds of valuable information! But once I started to dig into it, my enthusiasm started to flag.
Because as it turned out, the report suffered from two major problems:
It mostly regurgitated what was currently happening with the account.
It contained a lot of incorrect assumptions.
Problem #1: A regurgitation of existing data
Unfortunately, the report didn’t contain anything surprising or new. It was mostly a detailed recounting of what was currently happening with the account. And of course, we already knew what was happening with the account.
If my client had asked, I could have easily filled her in on account details without going to an outside auditor. And my team and I do make a concerted effort to communicate with our clients. We usually have weekly or bimonthly standing calls with them, and we also provide them with relevant reports.
Is it possible that the client was looking for information we weren’t providing? Possibly. But again, if we had been alerted to this need, we would have been more than happy to provide it. (If nothing else, the lesson here is to occasionally check in with the client to see if they want more detailed, or different, reporting.)
Much more problematic than the redundancy in the report was its lack of recommendations. The vast bulk of the report was focused on current account status, not suggestions for changes or improvements — which seemed like a lost opportunity.
Problem #2: Incorrect assumptions
Another major issue with the report was that many of its conclusions were based on incorrect assumptions.
The auditor lacked the context to clearly understand what was going on with the account. Repeatedly, the auditor found “errors” that weren’t errors at all — which he would have known if he’d had more background information.
Without this context, the value of the whole audit exercise comes into question.
What kind of information was the auditor lacking? I can think of four specific areas the auditor should have inquired about before even logging into AdWords:
1. What is the company’s business? What are its goals?
Whenever we land a new client, we ask the owner or marketing team to complete an onboarding questionnaire. The questionnaire allows us to better understand their business and its goals. It only seems logical that an auditor would go through a similar process.
After all, how can you audit a PPC account when you know little about the company?
We can also extend this “context for understanding” to PPC tools. Not everything happens in AdWords. In this case, my team and I were using Google Analytics for some of our tracking, and the auditor missed this point completely.
2. What tests are the agency currently running?
As an agency, we use labels religiously to clarify what we’re doing in client accounts — especially in terms of testing. But not all agencies do. And even so, it can be impossible to capture the complexity of these tests in one little label. Auditors would need to get more detailed information outside of the account to fully understand what’s being tested and why.
For example, we were in the process of testing the “optimize for clicks” setting on some of our client’s campaigns. Of course, the auditor saw this setting selected and immediately marked it with a big red “X” in the report.
We knew (and the client knew) why we were testing this setting. But the auditor didn't — and therefore he spent several paragraphs explaining why this isn't an optimal setting in most cases.
3. What strategies and tactics have been tried in the past and haven’t worked?
Similarly, it would be helpful for the auditor to know what things we’ve tested in the past — and the results.
For this particular client, the auditor noted that we didn’t have any non-branded keywords live. Why? Because the nature of this client’s business is seasonal. And in the past, we had heavily tested non-branded keywords in peak season, with disappointing results each time.
This year, we decided (in consultation with the client) to ditch non-branded keywords during peak season and expand our Google Display Network efforts instead. The result: a major success!
But of course, the auditor didn't know any of this. So he marked another big X and wrote a few more paragraphs explaining why non-branded keywords are important.
4. What projects are slated for testing in the next quarter or two?
As with all our clients, we had plans in place for testing over the next few months, including device adjustments and audience tests.
But again, the auditor wasn’t aware of these plans. When he noted their absence, he assigned more red Xs and gave more lengthy explanations for why they should be done. But we knew that already.
Make your audit worth your time
Based on this experience, I can only conclude that audits can eat up a lot of hours. The client had to spend time arranging for the audit and reviewing the report. I had to spend time reviewing the report and responding to the findings. And I can only imagine how many hours the auditor spent auditing the accounts and writing his report.
Therefore, we can conclude that even a free audit comes at a cost. So if you decide to move forward with one, whether free or not, make it worth your time by ensuring that the auditor has answers to the questions outlined above. And suggest that they put more emphasis on making recommendations than recapping current status.
Hopefully, by putting these pieces in place, you’ll end up with an accurate and valuable final report — that doesn’t immediately get filed in the circular folder.
Some opinions expressed in this article may be those of a guest author and not necessarily Search Engine Land. Staff authors are listed here.
About The Author
Pauline Jakober is CEO of Group Twenty Seven, a boutique online advertising agency specializing in the Google AdWords and Bing Ads networks. As a Google AdWords Certified Partner, Jakober and her team practice cutting edge paid search strategy and management for clients across many industries.