#15 Python Blogs To Follow
Text
Top 15 Best Computer Courses in High Demand.
This blog explores the 15 Best Computer Courses that can boost your career prospects. From web designing to Artificial Intelligence (AI), we'll delve into each course's significance. Whether you're a beginner or looking to advance your skills, this blog will help you make an informed decision.
The digital age has transformed the job market, making computer skills more essential than ever. Whether you're a tech enthusiast or looking to switch careers, choosing the right Computer Course is crucial. In this blog, you will learn about the 15 Best Computer Courses in the market right now. It will also offer valuable insights to assist you in making the best choice for your career.
Top 15 Computer Courses
a) Web Designing
b) Data Science
c) Multimedia and Animation
d) Internet of Things (IoT)
e) Cyber Security Analysis
f) Big Data Analytics
g) Digital Marketing
h) Software Development
i) Computer Hardware Engineering and Networking
j) Mobile App Development
2) Choosing the right Computer Course for career goals
3) Five best Computer Science roles
4) Conclusion
Top 15 Computer Courses
From Web Designing to Data Science, there are several courses available in the field of Computer Science. Let's take a look at the 15 Best Computer Courses here:
1) Web Designing
Web Designing is a dynamic and creative field that focuses on making visually appealing and functional websites. As more businesses and individuals increase their online presence, the demand for Web Designers will continue to grow.
Web Designers are responsible for crafting the layout, colour schemes, typography, and interactivity of websites. They use tools like Adobe Photoshop, Illustrator, and Dreamweaver to bring their designs to life.
To excel in Web Designing, you'll need a strong grasp of HTML, CSS, and JavaScript, which are the core languages of web development. Beyond technical skills, a keen eye for design and aesthetics is crucial. Web Designers often work on User Interface (UI) and User Experience (UX) design, ensuring that websites are visually appealing and easy to navigate.
Career options: Web Designer, Web Developer, UI/UX Designer, Graphic Designer, Freelance Web Designer.
2) Data Science
Data Science is a multidisciplinary field that revolves around collecting, processing, and analysing large and complex datasets to extract valuable insights and patterns. It combines elements of statistics, mathematics, programming, and Machine Learning to help organisations make better decisions based on data.
Data Scientists are in high demand across various industries as businesses seek to harness the power of data-driven decision-making. To embark on a career in Data Science, a strong foundation in mathematics and statistics is necessary.
Besides, proficiency in programming languages like Python or R is also essential for data manipulation and analysis. Data Scientists should possess excellent analytical and problem-solving skills to derive meaningful insights from data.
Career options: Data Scientist, Data Analyst, Data Engineer, Machine Learning Engineer, Business Analyst.
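As a small, illustrative taste of the data manipulation skills mentioned above, here is a minimal pandas sketch; the dataset and column names are invented purely for the example:

```python
import pandas as pd

# Hypothetical dataset: a few exam scores
scores = pd.DataFrame({
    "student": ["a", "b", "c", "d"],
    "score": [72, 85, 90, 63],
})

# Basic descriptive statistics of the kind a Data Analyst computes daily
print(scores["score"].mean())  # 77.5
print(scores["score"].max())   # 90
```

Real Data Science work layers statistics and Machine Learning on top of this kind of manipulation, but loading, cleaning, and summarising tabular data is where most projects start.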
3) Multimedia and Animation
Multimedia and Animation courses are a perfect blend of artistry and technical skills. These courses involve creating and manipulating images, sounds, videos, and other media elements to produce engaging and interactive content.
Multimedia and Animation professionals work in various industries, including entertainment, education, advertising, gaming, and more. To excel in this field, you'll need a basic knowledge of computer graphics and a strong sense of art and design.
Software tools like Adobe After Effects, Maya, and Blender are commonly used to bring creative visions to life. Multimedia and Animation Courses offer a wide scope of applications, and professionals in this field have the opportunity to unleash their creativity.
Career options: Animator, Multimedia Artist, Graphic Designer, Video Editor, Game Developer.
4) Internet of Things (IoT)
The Internet of Things is a cutting-edge field that connects physical objects, devices, and machines to the internet. This enables them to communicate and exchange data. IoT involves the use of sensors, microcontrollers, cloud computing, and networking technologies to create smart and interconnected systems.
This field holds huge potential for innovation and has applications in various domains, including healthcare, agriculture, transportation, and more. To excel in IoT, you'll need a basic understanding of electronics, programming, and networking.
IoT professionals are often tasked with creating intelligent systems that improve the quality of life and efficiency in various industries. With the continued growth of IoT, skilled professionals in this field are in high demand.
Career options: IoT Developer, IoT Engineer, IoT Architect, IoT Analyst, IoT Consultant.
5) Cyber Security Analysis
Cyber Security Analysis plays a vital role in safeguarding data, systems, and networks from unauthorised access, attacks, and threats. Cyber Security Analysts employ a range of techniques and tools, including encryption, authentication, firewalls, and antivirus software, to protect sensitive information. With the increasing frequency of cyberattacks, the requirement for these professionals is on the rise.
To pursue a career in Cyber Security Analysis, you'll need a good knowledge of computer systems, networks, and programming. Cyber Security Analysts must think like hackers to identify vulnerabilities and prevent potential breaches. Staying updated with the recent Cyber Security threats and countermeasures is essential in this constantly evolving field.
Career options: Cyber Security Analyst, Cyber Security Engineer, Cyber Security Consultant, Ethical Hacker, Penetration Tester.
6) Big Data Analytics
Big Data Analytics involves the processing and analysis of large...... continue Reading...
1 note
Text
general blog ask game
tagged by @ourlordapollo (:
tagging anyone who wants to do it
under a readmore bc it’s a little long
1. Why did you choose your url?
I was very into gravity falls at the time (especially loved the character Ford Pines) and once I thought of the pun I was basically obligated. it’s a pun on the last name but also on the “pining for the fjords” line in the dead parrot sketch from Monty Python’s flying circus
2. Any side blogs?
a handful, though a fair bit have fallen out of use
I’ll plug my art blog at @456art though
also have an oc aesthetic blog I sporadically reblog to
the rest are mostly for archiving/curating posts relevant to specific interests
3. How long have you been on tumblr?
def since 2012, maybe late 2011
4. Do you have a queue tag?
not really, I rarely queue posts unless I’m trying to post art at optimum times or am going to be offline for a while
5. Why did you start your blog in the first place?
a friend recommended it to me (bc I was really into doctor who and Sherlock fhfhghg)
6. Why did you choose your icon/pfp?
I like Ford Pines. I am trans. also I’ve had a wonky looking Ford as my icon for a long time and I don’t like changing icons/uns much bc I don’t want to confuse people
7. Why did you choose your header?
I don’t even have a header unless I do and forgot lmao
8. What is your post with the most notes?
afaik it’s a post I made that’s a photo of a weird store window in a local dying (now dead, press F) mall, hit about 20k which is Wild
though tbh my post about therapy being useful even if it can’t change a situation was still gaining traction when I turned off notifs, so it may be more by now.
9. How many mutuals do you have?
no idea lmao
10. How many followers?
1491 tho no idea how many are dead or bots
11. How many blogs do you follow?
1171
that’s a lot, I should probably clear out dead blogs
12. Have you ever made a shitpost?
absolutely
13. How often do you use tumblr?
too much. I have occasionally hit post limit, but only a couple of times
14. Have you ever had a fight/argument with another blog?
depends on how you count? I’ve def reblogged posts with counterarguments or had people argue with my posts, but I rarely go back and forth, and then usually only a single reply to a reply. I don’t get much engagement, and I only tend to write responses to good faith arguments or when I think the audience could benefit from seeing my reply. otherwise I just ignore or block when people try to start shit
15. How do you feel about “you need to reblog this” posts?
hate them. anything that tries to make you feel guilty if you don’t reblog is shit. if the content of the post itself is worth spreading I Will reblog around that sort of comment or make my own post. have unfollowed people for posting too many of these before and will again
milder pressure to reblog I may still reblog but I’ll tag it as reblog bait
16. Do you like tag games?
if I’m in the mood! I don’t always do them if tagged, but clearly I do them sometimes lol
I don’t super like tagging anyone in them though, don’t want anyone to feel pressured, but I don’t mind if people tag me so long as they aren’t bothered if I don’t respond
17. Do you like ask games?
depends, I do like ask lists that I can opt into by reblogging, but I’m not big on chain asks. I also often forget I reblogged an ask game list so please include the question itself and not just the number if you wanna ask me
18. Which of your mutuals do you think is tumblr famous?
idk about famous, but I do have a few mutuals who are pretty well known artists (:
19. Do you have a crush on any of your mutuals?
nah, though you guys are all very cool. I have a tendency to latch onto specific users sometimes but it’s generally a “wow this person is so awesome I want them to like me” not so much a crush, though I don’t generally act on this bc I can get hung up on getting their approval and make myself anxious about the opinion of someone I barely know, which is not great
1 note
Photo

15 Python Blogs To Follow http://bit.ly/2GIINKh

15 Python Blogs To Follow
Python has been on a steadily rising curve ever since its inception. Owing to its easy-to-understand syntax and the innumerable purposes it can serve, its rapidly growing popularity is quite justifiable. Python is one of those few languages that can integrate with several other fields just through the inclusion of specific libraries and sub-classes. Python has served as a link in advancing various fields of technology, including artificial intelligence, neural networks and full-stack development. If you're seeking to acquire the skills to code masterfully in Python, or are just browsing to add new library functions to an already vast repertoire, you'll find a never-ending sea of blogs dedicated to exactly that purpose. To narrow your field of vision and to make a few choices easier, we've compiled a list of 15 Python blogs that you absolutely need to follow.
Full Stack Python: If you have acquainted yourself with the basic syntax and functions of Python and are looking to try your hand at developing and deploying Python-powered web applications, then Full Stack Python is the blog you're looking for. With detailed tutorials that teach anyone the basics of developing a Python application, Full Stack Python is a Python coder's safe haven.
PyBloggers: PyBloggers encompasses everything related to Python. With reviews and tutorials for many different libraries, PyBloggers is the place to be to keep yourself updated with everything happening in the coding community, all while learning about different libraries. If you wish to focus on the wider range of functions that Python can offer, PyBloggers would be the best pick.
PyVideo: If written reviews and tutorials aren't sufficient for you, PyVideo is a good alternative. With over 13,000 videos indexed on the website detailing the functionality of different modules, PyVideo is the ideal pick if you prefer listening to lectures over reading.
Python Tips: If you're just looking to pick up a few tips to better your coding skill and experience, Python Tips will do the trick. Filled with bite-sized tips on Python coding, Python Tips is perfect for quickly improving your existing skill set.
Invent With Python: Invent With Python is a blog largely dedicated to the automation side of the language. If you want to see what you can teach the machine and learn the art of it, Invent With Python is a good place to be. Through its blog posts, you'll be teaching bots to play complex games in no time.
PyImagesearch: PyImagesearch is a blog entirely dedicated to the image-processing domain of Python. It teaches you what image processing is, walks you through the basics, and then, when you're ready, lets you move on to the advanced level of the domain with equally detailed tutorials.
AstroPython: Now here's a combination you don't see every day: Python and astronomy. AstroPython is dedicated to packages like AplPy, which are in turn dedicated to the field of astronomy. AstroPython helps anyone passionate enough about astronomy and keen enough to combine it with Python.
Neopythonic: What better place to read news from the Python community than a blog curated by the author of Python himself? Neopythonic is a rather entertaining blog run by Guido van Rossum. Anything billed as "Ramblings through technology, politics, culture and philosophy by the creator of the Python programming language" is sure to be fun, right?
Python Module Of The Week: Python Module of the Week is a unique blog that focuses on one module at a time and indexes detailed articles and tutorials for it. The module of the week, as the name suggests, changes every week, making anyone who visits the blog weekly steadily better at Python coding.
AnunRocks: AnunRocks is a blog dedicated to using the Django framework with Python. Django is a web framework used to build full-fledged websites using clean and fluid code. So if you're someone who wants to learn Django, or just improve your hand at it, this will be a good place to start.
Python Conquers the Universe: Given its fancy name, it just has to contain something sure to pique your interest, right? Well, you're not wrong. Python Conquers the Universe is a community-driven blog where you can ask complex questions and get simple solutions from experts who have a long history with Python as a language.
Mouse Vs. Python: Given that this blog is run by someone who uses Python exclusively to earn a living, you're sure to learn a few hacks or tricks to better your coding skill set. Mouse Vs. Python is not just a blog with a clever title; it is also a place to learn and enhance your Python coding skills.
Python Insider: Python Insider should already be your go-to site for catching up on news about core Python development and related information. Python Insider covers and reports on every new library function and update that even remotely relates to Python as a language.
Planet Python: Planet Python is your one-stop blog where you can find the latest posts from all the popular Python-related blogs in the community. If you don't wish to have your bookmarks brimming with Python blogs, Planet Python is a good place where your favourite blogs come as one.
Everyday Python: Everyday Python is a unique Python blog which aims not just to educate you about the different things you can do with Python, but also challenges you with projects to complete every day. Anyone who has diligently followed the everyday challenges is sure to hone their Python coding skills to a level on par with a master coder.
This brings our little discussion of Python-related blogs to a close. We hope you have a fruitful coding session, and click here for more tutorials.
0 notes
Text
Python Numbers
In this article, you'll learn about the different types of numbers used in Python, how to convert from one data type to another, and the mathematical operations supported in Python.
Number Data Type in Python
Python supports integers, floating-point numbers and complex numbers. They are defined as the int, float, and complex classes in Python.
Integers and floating-point numbers are distinguished by the presence or absence of a decimal point. For instance, 5 is an integer whereas 5.0 is a floating-point number.
Complex numbers are written in the form x + yj, where x is the real part and y is the imaginary part.
We can use the type() function to find out which class a variable or a value belongs to, and the isinstance() function to check whether it belongs to a particular class.
Let's look at an example:
a = 5
print(type(a))
print(type(5.0))
c = 5 + 3j
print(c + 3)
print(isinstance(c, complex))
When we run the above program, we get the following output:
<class 'int'>
<class 'float'>
(8+3j)
True
While integers can be of any length, a floating-point number is accurate only up to 15 decimal places (the 16th place is inaccurate).
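Both points can be checked quickly in the interpreter; a minimal sketch using only built-ins:

```python
# Integers have arbitrary precision: this 31-digit result is exact.
print(2 ** 100)   # 1267650600228229401496703205376

# Floats carry only about 15-16 significant decimal digits,
# so results can differ from the exact decimal value.
print(0.1 + 0.2)  # 0.30000000000000004, not exactly 0.3
```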
The numbers we deal with every day use the decimal (base 10) number system. However, programmers (especially embedded developers) often need to work with the binary (base 2), hexadecimal (base 16) and octal (base 8) number systems.
In Python, we can represent these numbers by placing an appropriate prefix before the number. The following table lists these prefixes.
Number System Prefix
Binary '0b' or '0B'
Octal '0o' or '0O'
Hexadecimal '0x' or '0X'
Here are a few examples:
# Output: 107
print(0b1101011)
# Output: 253 (251 + 2)
print(0xFB + 0b10)
# Output: 13
print(0o15)
When you run the program, the output will be:
107
253
13
Type Conversion
We can convert one type of number into another. This is also known as coercion.
Operations like addition and subtraction implicitly (automatically) convert integer to float, if one of the operands is a float.
>>> 1 + 2.0
3.0
We can see above that 1 (an integer) is coerced into 1.0 (a float) for the addition, and the result is also a floating-point number.
We can also use built-in functions like int(), float() and complex() to convert between types explicitly. These functions can even convert from strings.
>>> int(2.3)
2
>>> int(-2.8)
-2
>>> float(5)
5.0
>>> complex('3+5j')
(3+5j)
When converting from float to integer, the number gets truncated (the decimal part is removed).
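Note that truncation drops the fractional part toward zero, which differs from flooring for negative numbers; a small sketch:

```python
import math

print(int(2.9))          # 2  (fraction discarded, toward zero)
print(int(-2.9))         # -2 (also toward zero)
print(math.floor(-2.9))  # -3 (floor rounds toward negative infinity)
```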
Python Decimal
Python's built-in float class performs some calculations that might surprise us. We all know that the sum of 1.1 and 2.2 is 3.3, but Python seems to disagree.
>>> (1.1 + 2.2) == 3.3
False
What is happening?
It turns out that floating-point numbers are implemented in computer hardware as binary fractions, since the computer only understands binary (0 and 1). For this reason, most of the decimal fractions we know cannot be stored exactly in our computer.
Let's take an example. We cannot represent the fraction 1/3 as a decimal number; it gives 0.33333333..., which is infinitely long, and we can only approximate it.
It turns out that the decimal fraction 0.1 results in the infinitely long binary fraction 0.000110011001100110011..., and our computer only stores a finite number of its digits.
This only approximates 0.1 and is never exactly equal to it. So this is a limitation of our computer hardware, not an error in Python.
>>> 1.1 + 2.2
3.3000000000000003
To overcome this issue, we can use the decimal module that comes with Python. While floating-point numbers have precision up to 15 decimal places, the decimal module has user-settable precision.
Let's see the difference:
import decimal
print(0.1)
print(decimal.Decimal(0.1))
Output
0.1
0.1000000000000000055511151231257827021181583404541015625
This module is used when we want to carry out decimal calculations the way we learned in school.
It also preserves significance. We know 25.50 kg is more precise than 25.5 kg, as it has two significant decimal places compared to one.
from decimal import Decimal as D
print(D('1.1') + D('2.2'))
print(D('1.2') * D('2.50'))
Output
3.3
3.000
Notice the trailing zeros in the above example.
We might ask: why not use Decimal every time, instead of float? The main reason is efficiency: floating-point operations are carried out much faster than Decimal operations.
When to use Decimal instead of float?
We generally use Decimal in the following cases:
When we are making financial applications that need exact decimal representation.
When we want to control the level of precision required.
When we want to implement the notion of significant decimal places.
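One of the cases above, controlling the level of precision, can be sketched with decimal.getcontext() from the standard library decimal module, which sets how many significant digits Decimal operations keep:

```python
from decimal import Decimal, getcontext

getcontext().prec = 6           # keep 6 significant digits
print(Decimal(1) / Decimal(7))  # 0.142857

getcontext().prec = 28          # restore the default precision
print(Decimal(1) / Decimal(7))  # 0.1428571428571428571428571429
```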
Python Fractions
Python provides operations involving fractional numbers through its fractions module.
A fraction has a numerator and a denominator, both of which are integers. This module has support for rational number arithmetic.
We can create Fraction objects in various ways. Let's take a look at them.
import fractions
print(fractions.Fraction(1.5))
print(fractions.Fraction(5))
print(fractions.Fraction(1,3))
Output
3/2
5
1/3
While creating a Fraction from a float, we may get some unusual results. This is due to the imperfect binary floating-point representation discussed in the previous section.
Fortunately, Fraction also lets us instantiate from a string. This is the preferred option when using decimal numbers.
import fractions
# As buoy
# Output: 2476979795053773/2251799813685248
print(fractions.Fraction(1.1))
# As string
# Output: 11/10
print(fractions.Fraction('1.1'))
Output
2476979795053773/2251799813685248
11/10
This data type supports all basic operations. Here are a few examples.
from fractions import Fraction as F
print(F(1, 3) + F(1, 3))
print(1/F(5, 6))
print(F(-3, 10) > 0)
print(F(-3, 10) < 0)
Output
2/3
6/5
False
True
Python Mathematics
Python offers modules like math and random to carry out different kinds of mathematics like trigonometry, logarithms, probability and statistics, and so on.
import math
print(math.pi)
print(math.cos(math.pi))
print(math.exp(10))
print(math.log10(1000))
print(math.sinh(1))
print(math.factorial(6))
Output
3.141592653589793
-1.0
22026.465794806718
3.0
1.1752011936438014
720
Here is the full list of functions and attributes available in the Python math module.
import random
print(random.randrange(10, 20))
x = ['a', 'b', 'c', 'd', 'e']
# Get a random choice
print(random.choice(x))
# Shuffle x
random.shuffle(x)
# Print the shuffled x
print(x)
# Print a random float between 0 and 1
print(random.random())
When we run the above program, we get the output as follows. (Values may be different because of the random behaviour.)
18
e
['c', 'e', 'd', 'b', 'a']
0.5682821194654443
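One note worth adding: the values above change on every run. If you need reproducible "random" results (for example in tests), the random module lets you seed the generator first:

```python
import random

random.seed(42)  # fix the generator's starting state
a = random.randrange(10, 20)

random.seed(42)  # same seed -> same sequence
b = random.randrange(10, 20)

print(a == b)    # True: seeded runs are reproducible
```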
Read Full Article: Python Numbers
0 notes
Text
My Blog 2
The following Python code was written to load the dataset from the GAPMINDER codebook into Python and perform the data analysis.
import pandas as pd
import numpy # any additional libraries would be imported here
# Import the entire data set to memory from the GAPMINDER CodeBook .csv file
data = pd.read_csv('gapminder.csv', low_memory=False)
# Upper-case all DataFrame column names - place after code for loading data above
# data.columns = map(str.upper, data.columns)
# Bug fix for display formats to avoid run time errors - put after code for loading data above
pd.set_option('display.float_format', lambda x: '%f' % x)
print('Total Number of Samples in the Gapminder Codebook data file')
print(len(data))          # number of observations (rows)
print(len(data.columns))  # number of variables (columns)
# Setting the variables you will be working with to numeric
data1 = pd.DataFrame(data, columns=['country', 'alcconsumption', 'suicideper100th'])
# pd.to_numeric(s)
data1['alcconsumption'] = pd.to_numeric(data1['alcconsumption'], errors='coerce')
data1['alcconsumption'] = data1['alcconsumption'].sort_values(ascending=True)
data1['suicideper100th'] = pd.to_numeric(data1['suicideper100th'], errors='coerce')
data1['suicideper100th'] = data1['suicideper100th'].sort_values(ascending=True)
data1.sort_values(["alcconsumption"], axis=0, ascending=True, inplace=True)
# Counts and percentages (i.e. frequency distributions) for each variable
print('Printing the Country List')
t1 = data1['country']
print('Printing the 2008 alcohol consumption per adult (age 15+), litres Recorded and estimated average alcohol consumption')
print('adult (15+) per capita consumption in litres pure alcohol for each country')
c1 = data1['alcconsumption'].value_counts(sort=False)
print(c1)
p1 = data1['alcconsumption'].value_counts(sort=True, normalize=True)
print(p1)
# Finding the countries which have per capita consumption of more than 10 litres
sub1 = data1[(data1['alcconsumption'] >= 10)]
sub2 = sub1.copy()
# Frequency distributions on the new sub2 data frame
# print('counts for AGE')
c5 = sub2['alcconsumption'].value_counts(sort=False)
print(c5)
from numpy import mean
# Calculating the maximum adult (15+) per capita consumption in litres pure alcohol over the whole data set
max_alco = max(data1['alcconsumption'])
print('Maximum Alcohol Consumption from Data set')
print(max_alco)
avg_alco = mean(data1['alcconsumption'])
print('Average Alcohol Consumption from Data set')
print(avg_alco)
The following questions were part of the research:
1/ To find the maximum per capita alcohol consumption in the data set: it was found to be 23.01.
2/ To find the average alcohol consumption from the data set:
It was calculated as 6.689411.
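As a side note, pandas can report the maximum and mean (and more) in one call with describe(); here is a small sketch using a made-up stand-in for the real gapminder data, with the column name taken from the code above:

```python
import pandas as pd

# Hypothetical stand-in for the real gapminder 'alcconsumption' column
df = pd.DataFrame({"alcconsumption": [0.5, 6.0, 10.2, 23.01]})

stats = df["alcconsumption"].describe()  # count, mean, std, min, quartiles, max
print(stats["max"])   # 23.01
print(stats["mean"])  # 9.9275
```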
0 notes
Note
If you dont mind me asking, how much does owning a snake cost? I really want to get a ball python but im not sure how much to save? Between the vet and enclosure id really appreciate a rough estimate. Thank you!!!
Okay! *cracks knuckles* The BEST tumblr resource you can find is @wheremyscalesslither ‘s snake resources, I suggest you find them (Click on Heather’s blog, then its right in the description) and thoroughly read them. At least twice. xD (But seriously… do it)
Ball pythons are kinda like the opposite of a cat or dog - the snake doesn’t cost too much itself (Aren’t cats like… at least 300? or is it 3,000? I forget xD), but the enclosure costs a lot more. I’d also like to let you know that BP’s live for 40-50 years, so they’ll be a pet for life.
Snake: Depending on what morph you want, you can get a BP for 60$ (or free if you know where to look - breeders often give away their extra normals) or anywhere up to a couple thousand dollars. I got my girl for 150$, but you can get morphs other than a normal for 75$ and up.
Enclosure: BPs are actually best kept in sterilite tubs, large ones. I got my large tub at Target, for around 20$ I think. It needs to be large enough for the snake to stretch out, so about 4 feet of length on one side will do you good!
UTH: 20$ (Here's mine)
RHP or CHE: My RHP is 97$, and I love it, because we didn't have to cut a hole in the plastic tub lid so we could put it in (we were worried a lot about if the plastic would melt, etc, and then when we actually got some of the wire mesh to glue on the hole it kept popping up. I'd say, if you get a tub, get an RHP. It's like the compensation cost of a tub - you don't have to pay 140$ for a good sized tank but the heating costs a bit more. Tanks don't hold humidity as well anyway xD) CHE's run at about 10-12$. Don't buy a red light, they're bad for your snek :)
Temp Probes: You can buy a combo thermometer/hygrometer at Walmart for around 10$. You'll need one of those and then another one if your thermostat doesn't have a temp probe itself (Mine does!! Which is great xD)
Thermostat/Thermoregulator: This is IMPORTANT!!!! You need two: One for your UTH, and another for the RHP. This is also hard, because there's two types: Off/On and % based. Off/on is cheaper, running at the lowest around 20$, but Off/On works from extremes (Literally, only off and on) and can wear out your UTH which can put your snake in danger of getting burnt if it breaks. The % based controls the % of electricity your UTH runs at, and will have it running at, say, 75% power which keeps the temperature EXACTLY where you want it. These also happen to run anywhere from 150$ (This seems too low, tbh, I have to double check my sources) to 450$.
Temp heat gun: 20$ (mine)
Fake plants: anywhere from 5-20$ at Michael's
Hides: You can make your own out of leftover food take out containers (check out Noodles or LeeAnnChin xD), buy a rubber dog dish and cut out holes (5? $??? idk), or just buy one at the store for around 15$. (Don't get those half log things with two holes tho Dx they don't give a good enough hiding space)
Climbing materials: You'll need to buy a couple wooden things for your snake to climb on. I got a large dowel at Michael's (5$) which I cut to fit in the cage, and ended up with 3 climbing dowels for the snake. I also got a wood platform thing (3$?), which I screwed into the side and created a second level with. You can also buy wooden bird perches (I'm not even kidding - they run at around 3-5$ instead of the snake perches which run at 15-20$, and they're just as good. And super easy to install)
Substrate: You can get fancy wood chips (5-10$??) or whatnot or just use paper towels. You'll have to spray the enclosure a bit more with paper towels, but, hey, it's more time with your snek and that's a good thing :)
Waterdish: You can buy one for 10$ ish, use a dog bowl which is more of 5$, or just use a serving dish you have lying around at home you never use (LOL guess which one I did xD)
Spray bottle (x2): For misting the noodle. You can pick one up for 5$ (x2, one for water, one for disinfectant) at Home Depot or Target.
Disinfectant: I use one part white vinegar and one part water. You either have this at home or it's a couple bucks for a giant bottle :) (I also have the regular disinfectant wipes for myself to use after handling the snake and wiping down my phone, which are? 3-5$?)
Food: A younger snake eats about once a week, which means you'll be spending 2-5$ on food each week. If you get a snake that eats live, be prepared to feed them live rats, because not every snake switches over well. You'll need to find a store (or breed your own which is a whole different adventure), and then the rats usually cost around 5$ each. If you can switch your noodle to frozen/thawed food (you can buy it online), it can be anywhere from 1-2$ per rat.
Ratto Tongs: If you feed your noodle f/t, you might need to perform a "zombie dance" with the rat. I don't know about you, but having a pair of tongs makes everything better and minimizes the snake accidentally thinking your hand is the rat xD They're just a couple bucks :)
I personally haven't been to the vet yet! My noodle's been happy and healthy since the day I got her from the breeders, so I can't give you an estimate of a vet bill. Sorry ^^;
I… think that's everything. (if I've forgotten something, followers, please let me know!) Of course, you'll need shipping on all the supplies too…
Lighter side/being thrifty on literally everything: 350$ + a ratto/week and shipping
Heavier side: 560$ + a ratto/week and shipping
So, I'd probably save around 600$ for your snake. If you have extra by the time you're all ready to buy your noodle, you can put it toward getting a pretty morph or stash it for future rat and vet bills!
Good luck, and don’t forget to look at @wheremyscalesslither ‘s snake resources!
159 notes
Note
1-42 for the musical asks
1. What was the first musical you saw? Spamalot.
2. What musical got you really into theatre? SPAMALOT.
3. Who was your first Broadway crush? Christian Borle. What a babe.
4. Name three of your current Broadway crushes. Christian Borle, Andrew Rannells, Stephanie J. Block.
5. Name four of your dream roles. Mendel Weisenbachfeld (Falsettos), Adam Hochberg (An American in Paris), Patsy (Spamalot), Trina Weisenbachfeld (Falsettos).
6. Favourite off-Broadway show: It's not actually a play but I really love Dr. Horrible's Sing Along Blog and it should be a play version.
7. Favourite cast recording. Honestly? The whole March of the Falsettos album was a blessing and underrated.
8. 2013 Tony opening number or 2016 Tony opening number? 2013. Neil Patrick Harris owns my heart.
9. Favourite show currently on Broadway. Falsettos.
10. A musical that closed and you're still bitter about. Rant a bit. Honestly I don't have an opinion on this.
11. Best stage to screen adaptation? ??????????????????? I have no opinion on this either.
12. Worst stage to screen adaptation? ?????? Again, no opinion.
13. Favourite #ham4ham? I don't like anything that has to do with Hamilton.
14. A musical you would love to see produced by Deaf West? Every single one ever.
15. If you could revive any musical, which one would it be and who would you cast in it? Spamalot. And all the same people. Please. Eric Idle too. Wasn't Eric Idle in Monty Python? Yeah he should be in Spamalot.
16. If you could go to a concert at the 54 Below, whose would it be? Brandon Uranowitz.
17. Do you watch broadway.com vlogs? Which one is your favourite? Something Rotten!
18. Make a broadway related confession. One summer I'd cry myself to sleep every night because I didn't get to hug Christian Borle.
19. What do musicals mean to you? 2 hours of a good time.
20. Express some love for understudies and swings! I don't have anyone specific but like understudies are soOOOO important and I love them all.
21. Best Disney musical: Newsies.
22. Which Disney movie should be made into a musical? Fox and the Hound.
23. Which musical fandom has the funniest memes? Falsettos.
24. Name a character from a musical you would sort into your Hogwarts house. Adam Hochberg is definitely a Slytherin.
25. Name a Broadway star you would sort into your Hogwarts house. Christian Borle. Idk why.
26. Best on stage chemistry? Christian Borle and Andrew Rannells.
27. A Broadway duo you love. ^^^^^^
28. What book, tv show, movie, biography, video game, etc. should be turned into a musical? DR. HORRIBLE'S SING ALONG BLOG.
29. If you could make a jukebox musical, what artist or genre would you pick? The Beatles.
30. Favourite role played by _________________? CHRISTIAN BORLE - WILLIAM SHAKESPEARE
31. What musical has made you cry the most? Falsettos.
32. What musical has made you laugh the most? Falsettos.
33. Current showtune stuck in your head: Jason's Therapy.
34. A musical that has left you thinking about life for a long time or deeply inspired you. Falsettos.
35. If you could perform any ensemble number, which one would you pick? I can't pick just one I'm sorry.
36. Name a musical you didn't like at first but ended up loving. F a l s e t t o s.
37. What are some costumes you'd love to try on? Any of Whizzer Brown's. Good fashion.
38. Favourite dance break. The entirety of Newsies.
39. Favourite Starkid musical: ???? Yeah.
40. What's a musical more people should know about? Mine when I make one lmao.
41. What are some lines from musicals you really like? "MY FATHER'S A HOMO, MY MOTHER'S NOT THRILLED AT ALL." "FATHER HOMO; WHAT ABOUT CHROMOSOMES? DO THEY CARRY? WILL THEY CARRY? WHO'S THE HOMO HOW?" "Why don't ya feel alright for the rest of your life?"
42. Name a Tony performance you rewatch and rewatch. A Day In Falsettoland.
Mars Craters - Data Aggregation and Frequency Distribution
Introduction to blog
The purpose of this blog is to post my assignment work for the course "Data Management and Visualization", offered by Wesleyan University through Coursera. This post covers the Week 2 assignment, which focuses on writing a program and performing data analysis, specifically frequency distributions and aggregation.
Area of research and Data processing
The area of research selected in Week 1 was the study of Mars craters. The program was written in Python and the code is published in the "Python Code" section below, but first here is an explanation of the steps taken towards data aggregation.
1. Loaded the initial raw data into a pandas data frame.
2. Based on the hypothesis identified during the Week 1 assignment, the variables below were chosen and aggregated.
a. Crater size – A new column was inserted in the data frame to categorize craters in multiples of 10 km. For example, Cat 1 = size < 10, Cat 2 = 10 ≤ size < 20, and so on.
b. Morphology 1 – Categories were restricted to the first five letters of the nomenclature, which carry the most significance.
c. Morphology 2 – Categories were restricted to the Hummocky and Smooth types. Other secondary classifications were ignored, as they only depict patterns.
d. Number of layers – Even though this is correlated with Morphology 1, we kept this variable because it classifies craters up to five layers, whereas Morphology 1 treats three layers and above as multiple layers.
3. Frequency distribution data was generated using the code shown in the course. Findings are summarized in the Inference section of this blog.
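The size-categorization rule from step 2a can be sketched on its own; this minimal standalone illustration (with made-up diameters, not catalog values) mirrors what the assignment code below does with a pandas column:

```python
def size_category(diameter_km):
    """Bin a crater diameter into 10 km categories:
    category 1 = [0, 10), category 2 = [10, 20), and so on."""
    return int(diameter_km // 10) + 1

# A few illustrative diameters and their categories:
assert size_category(3.2) == 1    # under 10 km
assert size_category(10.0) == 2   # a boundary value falls into the next bin
assert size_category(57.9) == 6
```

The pandas equivalent, `data_mars['DIAM_CIRCLE_IMAGE'] // 10 + 1`, applies the same rule to the whole column at once.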
Python Code
# -*- coding: utf-8 -*-
"""
Created on Mon May 25 15:33:27 2020
@author: Chandrakant Padhee
"""
#BELOW CODES IMPORT NECESSARY LIBRARIES - PANDAS AND NUMPY
import pandas #importing pandas library
import numpy #importing numpy library
#BUG FIX TO REMOVE RUNTIME ERROR
pandas.set_option('display.float_format',lambda x:'%f'%x)
#READING DATA FROM CSV SOURCE FILE AND IMPORT THEM TO DATAFRAME data_mars
data_mars = pandas.read_csv('marscrater_pds.csv',low_memory=False)
data_mars.columns = map(str.upper,data_mars.columns)
#BELOW CODE ADDS CATEGORIZATION OF CRATER SIZE IN MULTIPLES OF 10KM.
#EXAMPLE 1 REPRESENTS CRATER SIZE LESS THAN 10KM AND 2 REPRESENTS SIZE BETWEEN 10KM to 20KM AND SO ON.
data_mars['Crater_Size_Cat'] = data_mars['DIAM_CIRCLE_IMAGE']//10 + 1
#BELOW CODE MODIFIES MORPHOLOGY_EJECTA_2 DATA TO RESTRICT IT TO THE HUMMOCKY/SMOOTH PREFIX
data_mars['Morph_2'] = data_mars['MORPHOLOGY_EJECTA_2'].str[:2]
#BELOW CODE MODIFIES MORPHOLOGY_EJECTA_1 DATA TO RESTRICT IT TO THE SIMPLE LAYERS NOMENCLATURE
data_mars['Morph_1'] = data_mars['MORPHOLOGY_EJECTA_1'].str[:5]
#AS TARGET IS TO STUDY MORPHOLOGICAL DATA FROM GLOBAL DATASET,
#WE CREATE NEW DATA FRAME REMOVING ALL THE ROWS HAVING "NUMBER_LAYERS" = 0
#STORE NEW DATA UNDER NEW DATA FRAME data_mars_mod
data_mars_mod = data_mars[data_mars.NUMBER_LAYERS!= 0]
#BELOW CODE IS TO CALCULATE FREQUENCY DISTRIBUTION OF "NUMBER OF LAYERS" IN TERMS OF COUNTS AND PERCENTAGES
c1 = data_mars_mod["NUMBER_LAYERS"].value_counts(sort=False)
p1 = data_mars_mod["NUMBER_LAYERS"].value_counts(sort=False, normalize=True)*100
#BELOW CODE IS TO CALCULATE FREQUENCY DISTRIBUTION OF "MORPHOLOGY CHARACTERISTICS 1" IN TERMS OF COUNTS AND PERCENTAGES
c2 = data_mars_mod["Morph_1"].value_counts(sort=False)
p2 = data_mars_mod["Morph_1"].value_counts(sort=False, normalize=True)*100
#BELOW CODE IS TO CALCULATE FREQUENCY DISTRIBUTION OF "MORPHOLOGY CHARACTERISTICS 2" IN TERMS OF COUNTS AND PERCENTAGES
c3 = data_mars_mod["Morph_2"].value_counts(sort=False)
p3 = data_mars_mod["Morph_2"].value_counts(sort=False, normalize=True)*100
#BELOW CODE IS TO CALCULATE FREQUENCY DISTRIBUTION OF "AGGREGATED CRATER SIZES" IN TERMS OF COUNTS AND PERCENTAGES
c4 = data_mars_mod["Crater_Size_Cat"].value_counts(sort=False)
p4 = data_mars_mod["Crater_Size_Cat"].value_counts(sort=False, normalize=True)*100
#BELOW CODE PRINTS OUT THE OUTPUT DISTRIBUTION OF NUMBER OF LAYERS AND EJECTA PROFILES
print('Number of counts of Craters with different number of layers are as below')
print(c1)
print('Percentages of Craters with different number of layers are as below ')
print(p1)
print('Number of counts with different Morphology ejecta 1 characteristics for craters are as below - Ex SLERS (Single Layer Ejecta / Rampart / Sinuous)')
print(c2)
print('Percentages of different Morphology ejecta 1 characteristics for craters are as below - Ex SLERS (Single Layer Ejecta / Rampart / Sinuous)')
print(p2)
print('Number of counts with different Morphology ejecta 2 characteristics for craters are as below - H = Hummocky and S = Smooth')
print(c3)
print('Percentages of different Morphology ejecta 2 characteristics for craters are as below - H = Hummocky and S = Smooth')
print(p3)
print('Counts of Crater size in multiples of 10KM are as below')
print(c4)
print('Percentages of Crater size in multiples of 10KM are as below')
print(p4)
Output Frequency Tables
VARIABLE 1 – LAYERS OF CRATERS
Number of counts of Craters with different number of layers are as below
1 15467
2 3435
3 739
4 85
5 5
Percentages of Craters with different number of layers are as below
1 78.389337
2 17.409153
3 3.745375
4 0.430794
5 0.025341
VARIABLE 2 – MORPHOLOGY_EJECTA_1
Number of counts with different Morphology ejecta 1 characteristics for craters are as below - Ex SLERS (Single Layer Ejecta / Rampart / Sinuous)
SLErS 1
MLERC 24
SLERC 1290
DLSPC 1
DLEPC 505
Rd/SP 1
RD/SL 1
Rd/SL 1298
SLERS 5130
MLERS 492
MLEPS 43
Rd/DL 637
Rd/ML 240
SLEPS 5053
DLEPS 633
DLERS 1244
SLEPC 2678
DLERC 393
MLEPC 22
SLEPd 44
DLEPd 1
Percentages of different Morphology ejecta 1 characteristics for craters are as below - Ex SLERS (Single Layer Ejecta / Rampart / Sinuous)
SLErS 0.005068
MLERC 0.121636
SLERC 6.537935
DLSPC 0.005068
DLEPC 2.559424
Rd/SP 0.005068
RD/SL 0.005068
Rd/SL 6.578481
SLERS 25.999696
MLERS 2.493538
MLEPS 0.217931
Rd/DL 3.228422
Rd/ML 1.216360
SLEPS 25.609447
DLEPS 3.208150
DLERS 6.304800
SLEPC 13.572551
DLERC 1.991790
MLEPC 0.111500
SLEPd 0.222999
DLEPd 0.005068
VARIABLE 3 – MORPHOLOGY_EJECTA_2
Number of counts with different Morphology ejecta 2 characteristics for craters are as below - H = Hummocky and S = Smooth
Sm 5561
Hu 13912
HU 3
Percentages of different Morphology ejecta 2 characteristics for craters are as below - H = Hummocky and S = Smooth
Sm 28.184076
Hu 70.508337
HU 0.015205
VARIABLE 4: CRATER SIZE (DIAMETER) IN MULTIPLES OF 10KM
Counts of Crater size in multiples of 10KM are as below
9.000000 1
4.000000 172
3.000000 618
2.000000 3404
1.000000 15463
6.000000 15
12.000000 1
8.000000 5
5.000000 46
7.000000 6
Percentages of Crater size in multiples of 10KM are as below
9.000000 0.005068
4.000000 0.871725
3.000000 3.132127
2.000000 17.252040
1.000000 78.369064
6.000000 0.076023
12.000000 0.005068
8.000000 0.025341
5.000000 0.233136
7.000000 0.030409
Inference:
The frequency distributions in the tables above were generated after segregating the data to the rows for which morphology information was available; the remaining rows were deleted from the data frame. The distributions reveal the following:
1. Most of the craters in the segregated data have one layer (78%) or two layers (17%); the small remainder is distributed among three-, four- and five-layer craters.
2. This is also supported by the Morphology_Ejecta_1 data, which adds the information that most craters in the single- and double-layer categories are roughly equally represented across the Pancake Circular, Pancake Sinuous, and Rampart Sinuous categories.
3. Morphology_Ejecta_2 reveals that ejecta patterns are mostly of the hummocky type as compared to smooth profiles, in roughly a 70:30 proportion.
4. Lastly, as far as crater size is concerned, most craters fall in the less-than-10 km category, with a 78% share.
The above information relates closely to the initially hypothesized correlation between layer-dependent morphology and crater size, but this can only be proven after further analysis of the data.
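As a hypothetical further-analysis step (not part of the assignment code above), the layer-count vs. size-category relationship could be examined with a simple cross-tabulation. The sketch below uses pure Python on made-up pairs; on the real data frame, `pandas.crosstab(data_mars_mod['NUMBER_LAYERS'], data_mars_mod['Crater_Size_Cat'])` would produce the equivalent table:

```python
from collections import Counter

# Made-up (layers, size_category) pairs standing in for the real columns
observations = [(1, 1), (1, 1), (1, 2), (2, 1), (2, 2), (3, 2)]

# Cross-tabulate: count how often each (layers, size_category) pair occurs
crosstab = Counter(observations)

# Row totals per layer count, used to compare size profiles across layer classes
layer_totals = Counter(layers for layers, _ in observations)

assert crosstab[(1, 1)] == 2
assert layer_totals[1] == 3
# Share of single-layer craters that fall in size category 1:
assert crosstab[(1, 1)] / layer_totals[1] == 2 / 3
```

Comparing these per-layer size profiles is one way to see whether size and layer count move together before running a formal test.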
Summary
The purpose stated in the initial post is hereby covered, per the points below.
Writing program code: Python was used to write the code, which is presented in the "Python Code" section.
Display of variable frequency tables: covered in the "Output Frequency Tables" section.
Description of the frequency distributions: covered in the "Inference" section.
Dev.to is Fast

It is insane. Dev.to is capital-F Fast. It's so fast that it feels like a local web app, not one on the internet. I was exposed to Dev.to a long time ago but it had always been one-off articles, so the speed was less noticeable; when I actually went through to sign up for an account, edit my profile, etc., I noticed how ridiculously fast it is. My immediate thought was: what is Dev.to doing right that pretty much every other site on the internet right now is doing wrong? How can they be so fast? By comparison, it makes the sites of even internet giants (Reddit, GitHub, and even Hacker News!) feel like turtles. And it's not like Dev.to is small -- Alexa reports it at around the 8000~ range in global ranking (compared to HackerNoon at 4000~ or npmjs at 6000~ or so), and there's tons of content!

Digging into it, it's a traditional Rails site with InstantClick

I first started by viewing source and opening Chrome Dev Tools, and noticed that every page had full HTML rendered, so it didn't seem like a React (or any other frontend framework) site. Then I noticed that whenever I hover any links, it sends a request to preload pages. I knew of a JS library called Turbolinks that did that, so I thought that might be it. Then I noticed the Dev.to site code is open source, so I checked it out, found its tech stack page, and saw that InstantClick was the secret sauce. It works exactly as I observed -- it preloads the HTML output of hyperlinks as you hover them, replaces the current page's HTML when you click on them, and adds some history.pushState() for good measure.

This is the model content-based web apps should strive for

Let's face it, big orgs/startups that manage content-based web sites with a modest amount of functionality/business logic almost always default to React for a frontend framework. Organizationally this makes sense: you have a big dev team working on the product, and a component-based frontend architecture is the most sensible way to work together on a big frontend codebase. But is it the best UX? One doesn't need to venture far from your typical popular media sites (TechCrunch, ESPN, etc.) to see them doing React or React-like behaviors -- your typical server-side rendering of the first page, loading a JS bundle, hydrating the app in a client JS framework, and then the next pages load in API calls, etc. In contrast, Dev.to simply prints the output HTML of the page from the server at each server endpoint, and it's only boosted by InstantClick preloading the page when you hover the links (and subsequently the new HTML replacing the current HTML to prevent a full browser page navigation). I argue that this is the model content-based sites and content-based web apps, including blogs, e-commerce web sites, etc., should be using in their development.

Why does this make sense? We, as the whole tech industry, spent a decade (roughly from 2007 to 2017 or so) optimizing every part of the server side of web apps -- everything ranging from database performance (techniques like read replicas, sharding, and newer and newer generations of DB engines), caching (memcache, Redis, local web server caching), asynchronous work queues for writing data, load balancers and infinitely scaling web servers (with the advent of AWS), to regions/availability zones and edge CDNs to be closer to users. Once we had everything, we proceeded to nullify all that investment in server-side performance optimization by putting a heavyweight frontend framework like React on the client side to work with those servers. Now to load a web site you need to load a 1 MB JS bundle, show a spinner on the first page load of your site or web app, and let React handle this whole virtual DOM and calculate how to render everything at every point in time when anything changes state. We're right back to where we were in the 2000s when web sites/web apps were slow -- back then because we didn't have all those server-side technologies and consumer internet connections were slow. Web sites take a few seconds to load now, just as they did in 2007, but for entirely different reasons. Isn't it a little sad for the end user that, with all the advancement of technology, the user experience stayed roughly the same (or worse)?

But, component-based frontend development!

Honestly, as someone who's not a fan of React, the only valid argument I agree with for doing a React frontend (and this is a huge one) is the organizational reason. In my 15 years of working in software teams, component-based frontend is really the most sensible way for a big team to work on a frontend codebase together and scale a frontend codebase; there's no way around it. Traditional MVC frameworks with templated views make it entirely too difficult to have any reasonable re-use of frontend code without some frankenstein monstrosity, and scaling a jQuery frontend inevitably leads to spaghetti code. I believe it's one of the primary reasons React has taken off in tech startups; it's the best way to grow a dev team to work on frontend seriously.

Maybe a new development paradigm is in order?

How can we take advantage of server-side generated HTML outputs in a web app, but also take advantage of component-based frontend development? Here's something I came up with literally just tonight; perhaps this could inspire you to try something similar?

- Frontend team develops in a component-based framework of choice (React, Vue, Svelte, etc.).
- A frontend toolchain builds static HTML of each page based on components (this build happens both locally during development and on the CI tool) -- this can continue to use Webpack, Rollup, or whatever folks prefer.
- Each built page takes in a set of variables (similar to how templated views work in traditional MVC web apps).
- Frontend can mock this backend-provided data similar to how it mocks API responses for React frontends during development.
- Backend uses a traditional MVC architecture, with any language (Ruby, PHP, Python, Node.js) and framework (Rails, Laravel, Django, CodeIgniter) of choice.
- Backend renders views by loading the prebuilt static HTML by page name, and passes in variables just as it does for templated views.
- Feature development workflow: backend and frontend team members work on the same branch for a feature and each check in their changes independently; when frontend checks in the component source code, the CI tool runs the build toolchain and the generated static HTML for each view is committed into the repo as well.
- Frontend can continue to organize code in a similar React monorepo as they do right now, with reusable components and everything.

Don't get me wrong, your React components can still react to clicks, moves, etc. and change states locally, and you are free to still make AJAX requests in many valid use cases. All of that doesn't change. But page navigation should be left to InstantClick and the server to generate the next page, when it's an entire page change.

Your thoughts?

I haven't given this workflow any more thought than it took to just write it all down, so I'm curious to hear the opinions of others.
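The backend step proposed above, rendering a prebuilt static HTML page by name with variables passed in, can be sketched in a few lines of Python. The page name, placeholder names, and in-memory page store here are hypothetical stand-ins; a real setup would load the CI-built HTML files through the MVC framework's own view layer:

```python
from string import Template

# Hypothetical prebuilt pages, as the frontend CI step might emit them.
# In a real project these would be HTML files on disk, e.g. build/profile.html
PREBUILT_PAGES = {
    "profile": "<h1>$username</h1><p>$bio</p>",
}

def render(page_name, **variables):
    """Load the prebuilt HTML for a page and substitute backend variables,
    mirroring how an MVC framework fills a templated view."""
    return Template(PREBUILT_PAGES[page_name]).substitute(**variables)

html = render("profile", username="ada", bio="writes fast sites")
assert html == "<h1>ada</h1><p>writes fast sites</p>"
```

Because the HTML is generated ahead of time, the backend's only per-request work is variable substitution, which is part of what keeps this model fast.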
http://damianfallon.blogspot.com/2020/04/devto-is-perfect-demonstration-of-how.html
New Post has been published on https://www.cryptomoonity.com/october-2018-github-development-bounty-winners/
October 2018: GitHub Development Bounty Winners
The month of October has passed, and so has our October GitHub bounty program. Thanks to all who participated in October; we hope you will continue to do so in November, and don't forget it is never too late to join. What better way to learn and earn some ARK?
Don’t forget we are running a special ARK Core Testathon from 1st November 2018 till 8th November 2018 with increased rewards in the week on ARK Core Repo : https://blog.ark.io/ark-core-testathon-c965c07373e3
We also have numerous official SDK repos covering the majority of popular programming languages, so if you'd like to get started with blockchain development, we have you covered. And don't forget to give our PR guidelines a read.
Guidelines: https://docs.ark.io/guidebook/contribution-guidelines/
SDK: https://docs.ark.io/api/sdk/
SDK Clients: https://docs.ark.io/api/sdk/clients/guidelines.html
SDK Crypto: https://docs.ark.io/api/sdk/cryptography/guidelines.html
November 1st — November 30th rewards will be:
1st : $2,000 USD
2nd: $1,500 USD
3rd: $1,000 USD
4th & 5th: $250 USD / each
6th–8th : $50 / each
(+ merged PR’s — $10 and for merged Core v2 PRs $20 each)
Missed what this is all about and you want to get involved? Read : “ARK GitHub Development Program Bounty”.
October 2018 Winners:
— — — — — — — — — — — — — — — — — — — — — — — — — — — — 1st place: 2,000 USD Github User: Dated — — — — — — — — — — — — — — — — — — — — — — — — — — — —
refactor(misc): further numerical input validation 2018–10–31 07:27:19 — core#227181272
fix: adjust search tests 2018–10–30 10:39:26 — explorer#226867731
fix: add associated username to wallet api response 2018–10–30 09:10:35 — core#226829760
chore: update nightwatch 2018–10–29 14:42:52 — explorer#226576736
fix: move timeout in recipient test 2018–10–28 03:30:28 — explorer#226330442
chore: tail test output only on test failure 2018–10–26 12:23:54 — core#226078109
chore(core-database-postgres): add better function description 2018–10–26 10:47:39 — core#226049135
chore: update readmes 2018–10–25 02:09:29 — docs#225379700
chore: adjust help text 2018–10–24 11:13:52 — core-commander#224521249
fix: add ‘update-notifier’ to base dependencies 2018–10–24 11:13:29 — core-commander#224703294
chore: setup CircleCI 2018–10–24 10:07:41 — docs#225338868
fix: move search-tip styles to tooltips.css 2018–10–24 07:27:12 — explorer#225280456
refactor: search tooltip arrow placement 2018–10–23 14:41:56 — explorer#225062631
feat: add dark-themed tooltips 2018–10–23 13:56:41 — explorer#225041864
fix: remove double nightMode dispatch 2018–10–23 13:14:52 — explorer#225026071
chore: add github templates 2018–10–23 09:02:27 — mobile-wallet#224570350
chore: hide test:coverage build progress 2018–10–23 08:53:18 — mobile-wallet#224897132
fix: use title from network config in page title 2018–10–23 06:55:08 — explorer#224696736
fix: use network token in voter count tooltip 2018–10–23 06:49:29 — explorer#224569455
fix: add error handling around WalletService.vote() 2018–10–23 02:25:41 — explorer#224735099
chore: setup CircleCI 2018–10–21 15:16:26 — mobile-wallet#224545099
fix: check if logfile exists before tailing 2018–10–21 06:03:37 — core-commander#224506656
refactor: distribute versions accross whole line 2018–10–20 16:53:26 — core-commander#224484853
Adds ‘cordova-allow-background-audio’ package 2018–10–20 11:53:05 — mobile-wallet#219195446
fix: add ‘apt-transport-https’ to base dependencies 2018–10–20 04:08:23 — core-commander#224433812
fix: add ‘press_to_continue’ to main menu logs 2018–10–20 04:08:05 — core-commander#224436042
refactor: distribute states accross whole line 2018–10–20 04:07:50 — core-commander#224437809
removes double case in menu option parsing 2018–10–20 04:07:31 — core-commander#224427345
fix: calculate correct logger padding 2018–10–20 04:05:56 — core#224441904
chore: store test results 2018–10–19 11:46:17 — python-client#224211246
chore: add codecov to circleci config and misc fixes 2018–10–19 09:36:53 — elixir-crypto#224203561
chore: add codecov to circleci config and misc fixes 2018–10–19 09:36:36 — elixir-client#224205786
chore: store test results 2018–10–19 09:36:07 — python-crypto#224211237
refactor: renames main module to ‘client’ 2018–10–19 09:34:46 — python-client#224020835
fix: use Gemfile instead of Gemfile.lock as cache key 2018–10–18 17:34:03 — ruby-crypto#224032577
fix: use Gemfile instead of Gemfile.lock as cache key 2018–10–18 17:33:26 — ruby-client#224032566
chore: generate test report 2018–10–18 13:48:45 — java-client#223962450
chore: set semantically correct working directory 2018–10–17 12:11:08 — java-crypto#223269943
should fix case-insensitive import collision 2018–10–17 09:04:06 — go-crypto#223521572
chore: set semantically correct working directory 2018–10–17 06:24:13 — ruby-crypto#223269833
sets correct working dir for test-node-stable 2018–10–16 09:10:30 — explorer#223168694
adds ‘click’ trigger to converted tx amount tooltip 2018–10–16 08:12:18 — explorer#222938579
adjusts wallet-blocks page titles 2018–10–13 00:47:23 — explorer#222603314
chore: update pull request template 2018–10–09 11:12:30 — core-commander#219705491
fix: sets tooltip color and better text if forgingStatus is 5 2018–10–03 02:38:16 — explorer#219786536
fix/refactor: shows initial value of zero for the delegate count 2018–10–02 15:43:00 — explorer#219735496
hardens transactions recipient test 2018–10–01 07:27:23 — explorer#219273728
increases timeout on block test 2018–10–01 06:07:49 — explorer#219266462
adds detail section to wallet/blocks 2018–10–01 05:40:54 — explorer#219235121
— — — — — — — — — — — — — — — — — — — — — — — — — — — — 2nd place: 1,500 USD Github User: ItsANameToo — — — — — — — — — — — — — — — — — — — — — — — — — — — —
Special thanks to ItsANameToo for developing the Swift Crypto package (30+ commits not listed in this blog post, can be viewed here : https://github.com/ArkEcosystem/swift-crypto/commits/develop )
fix: don’t show additional arktoshis in supply 2018–10–12 06:07:04 — explorer#222343943
— — — — — — — — — — — — — — — — — — — — — — — — — — — — 3rd place: 1,000 USD Github User: kalgoop — — — — — — — — — — — — — — — — — — — — — — — — — — — —
Bug: Fix delegate search 2018–10–30 15:12:49 — mobile-wallet#226369491
Bugfix: remove Vote for Watch-Only 2018–10–30 14:57:28 — mobile-wallet#226825516
Fixed a few styling errors 2018–10–29 09:22:24 — mobile-wallet#226317256
Bugfix: update PIN-code Modal 2018–10–26 22:54:12 — mobile-wallet#226209507
BugFix: countdown for PIN 2018–10–26 21:06:18 — mobile-wallet#226106406
Vanity — keyword in starting 2018–10–26 10:48:33 — javascript-cli#223683939
Updated an operator acc to convention 2018–10–26 10:07:55 — javascript-cli#223677813
Fixed Typo 2018–10–20 12:07:18 — mobile-wallet#223940314
Always display 2 decimal digit 2018–10–20 11:57:06 — mobile-wallet#223703584
Fixed Styling to suit the Standard 2018–10–18 03:34:32 — core#223569235
Updated for v2 2018–10–18 02:41:11 — core#223613892
— — — — — — — — — — — — — — — — — — — — — — — — — — — — 4th and 5th place: 250 USD / each (in no particular order) — — — — — — — — — — — — — — — — — — — — — — — — — — — —
Github User: brtastic
Adjust close button styles to fit new alignment 2018–10–30 10:10:08 — explorer#226736549
Unit tests: language menus, total forged counter 2018–10–30 03:53:36 — explorer#226727105
Fix unknown getter error in unit tests 2018–10–30 03:45:35 — explorer#226689868
Additional unit tests (header components, minor: store, i18n) 2018–10–29 09:21:21 — explorer#226382410
Language switcher UI 2018–10–28 09:48:14 — explorer#225555992
Added Polish translations 2018–10–23 07:14:26 — explorer#224558673
Price chart UI improvements 2018–10–23 07:07:13 — explorer#224561243
Github User: JeremiGendron
docs: send transactions with core-tester-cli 2018–10–29 09:10:09 — docs#226432457
docs: javascript crypto identities 2018–10–29 09:10:00 — docs#226339838
misc: help message wording 2018–10–29 03:25:20 — core#226394638
docs: many changes (78, 79, 90, 82, 85) 2018–10–28 17:22:09 — docs#226379193
ARK to Ark in all cases but env variables 2018–10–28 15:41:47 — core-commander#226375785
docs: write a plugin 2018–10–28 01:15:14 — docs#225926567
Refactor Ark explorer to only Mobile 2018–10–25 02:20:39 — docs#224555721
[WIP] Fix/improve experience 2018–10–17 05:33:06 — docs#223464087
test: graphql blocks error and relationships 2018–10–12 11:58:11 — core#222280031
test: verify that the tx are genesis tx 2018–10–12 11:53:52 — core#222348499
transactions by blockId 2018–10–12 11:49:24 — core#222288946
— — — — — — — — — — — — — — — — — — — — — — — — — — — — 6th-8th place: 50 USD / each (no particular order) — — — — — — — — — — — — — — — — — — — — — — — — — — — —
Github User: paroxysm
test(core-tester-cli): vote command 2018–10–29 04:07:32 — core#226425670
Revert “chore: use static log file path (#1129)” 2018–10–28 04:35:51 — core#226341400
test(core-tester-cli): additional test coverage for transfer command 2018–10–25 02:12:51 — core#225600969
Add core-tester-cli option for custom smartBridge values 2018–10–10 06:20:14 — core#221584476
Fix: core-tester-cli/transfer, undefined passphrase when overridingPassphrase 2018–10–09 21:23:39 — core#221554036
Github User: n4ru
Add a sanity check on rebuild to prevent future generator errors due … 2018–10–12 06:25:15 — ARKcommander#222360835
Fix spam. 2018–10–09 19:11:39 — ark-node#221552772
1.2.0 Hard Fork 2018–10–09 18:49:19 — ark-node#221542077
Github User: nge6427
Set block.ready to false if already present 2018–10–13 10:20:30 — ark-node#222640363
bignum fixes and other minor corrections 2018–10–11 16:51:35 — ark-node#222191432
— — — — — — — — — — — — — — — — — — — — — — — — — — — — 9th+ place: PR’s (no particular order) — — — — — — — — — — — — — — — — — — — — — — — — — — — —
Github User: Pedro-Souza
chore: fix contribution link 2018–10–23 03:51:44 — swift-crypto#224838129
replace checkbox to toggle 2018–10–20 11:55:49 — mobile-wallet#223435951
better ux for button create 2018–10–20 11:54:59 — mobile-wallet#222690739
update translations 2018–10–20 11:54:15 — mobile-wallet#221238327
not allow negative value 2018–10–08 03:39:08 — qrcode#220984654
Github User: Ciband
Clear tslint errors 2018–10–26 13:33:15 — ark-ts#199250117
feat: Add Windows Support 2018–10–25 04:59:40 — cpp-client#225619094
feat: add codecov support 2018–10–25 04:30:37 — cpp-crypto#225618198
chore: Add github templates 2018–10–25 03:53:22 — cpp-crypto#225486872
chore: Add github templates 2018–10–25 03:52:56 — cpp-client#225488220
feat: add codecov support 2018–10–25 02:10:26 — cpp-client#225579785
Github User: rodxavier
Add tests for retrying requests 2018–10–30 05:42:03 — python-client#226794043
Run deployer in docker 2018–10–30 00:25:43 — deployer#225249346
Request retry mechanism 2018–10–26 10:10:37 — python-client#225277810
Github User: h1psterx
Fix critical vulnerabilities 2018–10–20 12:14:42 — mobile-wallet#224470911
Update nvm version 2018–10–20 12:10:42 — mobile-wallet#224465239
Add pull to refresh on wallet dashboard 2018–10–20 12:09:54 — mobile-wallet#224263802
Github User: thomas-neuman
feat(client): implement top wallets API 2018–10–31 07:20:03 — python-client#227178688
Github User: alucqrd
Fixed qrcode regex analysis to read the parameters (amount, vendorFie… 2018–10–15 12:33:16 — mobile-wallet#207948885
Github User: geops22
refactor: only create a psql account for the system user 2018–10–26 03:01:42 — core-commander#225956329
Github User: alessiodf
Use pm2 describe rather than pm2 status to check status of relay/forger/explorer 2018–10–16 14:33:08 — core-commander#223269320
Github User: ilgio
create DB before to add privileges on database 2018–10–02 11:23:41 — core-commander#219660058
Github User: 40n4ld
Fix eslint errors (fix #1062) 2018–10–04 17:00:16 — core#220435043
— — — — — — — — — — — — — — — — — — — — — — — — — — — — SECURITY BOUNTY — — — — — — — — — — — — — — — — — — — — — — — — — — — —
Github User: wownmedia
Reported vulnerability in ARK Node due to 3rd party dependency (ExpressJS) vulnerability. Resolved here : https://github.com/ArkEcosystem/ark-node/commit/608c7a900e8cf79360a371ab1dd0f0dc121a4ec8
I missed these bounties — What is this all about?
Read the full instructions here: https://blog.ark.io/ark-github-development-bounty-113806ae9ffe
Are there any other development bounties going on?
Yes, there is also the community run ACF (ARK Community Fund) — read more here:
http://arkcommunity.fund/
And remember to join our Slack!
Follow us on social media ( Twitter | Facebook | Reddit ), join our community ( Slack | Discord ) and stay tuned to our blog on Medium and on Steemit.
October 2018: GitHub Development Bounty Winners was originally published in ARK.io | Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.
Source
Python Web Development with Django
Introduction

Welcome to Django! Greetings, and welcome to Django! We’re glad to have you along on our journey. You will discover a powerful Web application framework that lets you do everything rapidly—from designing and developing the original application to updating its features and functionality without requiring major changes to the codebase.

About This Book

Several Django books are already on the market, but ours differs from most in that we focus equally on three areas: Django basics, a variety of example applications, and advanced Django topics. Our intent is to make this the most well-rounded book on the subject, one you find useful regardless of background, and which will give you a complete picture of the framework and what you can do with it.

Chapter Guide

In Figure 0.1, you see recommended starting points for your reading depending on your Python and Django experience. Of course, we recommend a cover-to-cover treatment, but the diagram will help if time is not on your side. Regardless of your experience, you are always welcome to look at the applications because reading and studying code is one of the best ways to learn. We also provide this chapter-by-chapter reading guide to further help direct you to where you need to read.

Part I, “Getting Started”

Part I covers the basic material needed to introduce users new to Django and/or Python, although we recommend Chapter 3, “Starting Out,” even to advanced readers.

Chapter 1, “Practical Python for Django”
This chapter is an introduction for readers who are new to Python. In one comprehensive chapter, we show you not only the syntax, but also go a bit more in-depth and expose you to Python’s memory model and data types, especially constructs commonly used in Django.

Chapter 2, “Django for the Impatient: Building a Blog”
This is a chapter for those who want to skip any Python introduction and want to dive immediately into a Django application that can be completed in 15-20 minutes. It gives a good overview of what’s possible with the framework.

Chapter 3, “Starting Out”
For those with a bit more patience, this chapter serves as an introduction to all the foundations of developing Web-based applications (useful for newbies and experienced coders alike). Once the formalities are over, we describe how each concept fits into the world of Django, as well as what its philosophies are and how it may differ from other Web application frameworks.

Part II, “Django in Depth”

Part II covers all the basic components of the framework, laying the foundation for the example applications in Part III, “Django Applications by Example.”

Chapter 4, “Defining and Using Models”
In Chapter 4, learn how to define and work with your data model, including the basics of Django’s object-relational mapper (ORM), from simple fields up to complex relations.

Chapter 5, “URLs, HTTP Mechanisms, and Views”
This chapter goes into detail on how Django handles URL processing and the rest of the HTTP protocol, including middleware layers, as well as how to use Django’s timesaving generic views and how to write custom or partially custom views from scratch.

Chapter 6, “Templates and Form Processing”
Chapter 6 covers the final major piece of the framework, where we explore Django’s template language and its form-handling mechanisms. It covers how to display data to your users and get data back from them.

Part III, “Django Applications by Example”

In Part III, we create four distinct applications, each highlighting a different aspect or component of Django development, both to introduce new general ideas and to expand on the concepts found in Parts I and II.

Chapter 7, “Photo Gallery”
In Chapter 7, learn how to apply the “Don’t Repeat Yourself” convention to your URL structure and create a new thumbnail-creating image form field while you make a simple photo gallery application.

Chapter 8, “Content Management System”
Chapter 8 contains two related approaches to creating a CMS or CMS-like system and covers the use of a number of “contrib” Django applications.

Chapter 9, “Liveblog”
Chapter 9 covers writing a “liveblog”—a site that makes use of advanced JavaScript techniques, serves as a backdrop for applying AJAX to a Django project, and shows how easy it is to use any AJAX toolkit you want.

Chapter 10, “Pastebin”
In Chapter 10, learn the power of Django’s generic views as we create a pastebin using almost no custom logic whatsoever.

Part IV, “Advanced Django Techniques and Features”

Part IV is a collection of advanced topics, ranging from customizing Django’s admin application to writing command-line scripts that interface with your Django applications.

Chapter 11, “Advanced Django Programming”
Chapter 11 covers a number of topics related to fleshing out your own application’s code, such as RSS generation, extending the template language, or making better use of the Django admin application.

Chapter 12, “Advanced Django Deployment”
In Chapter 12, learn a number of tricks related to deploying Django applications or working with your app from outside your Django project’s core code, such as command-line scripts, cron jobs, testing, or data import.

Part V, “Appendices”

Part V fills in the remaining gaps or addresses topics relevant to the rest of the book that don’t fit in well as full chapters. Learn the basics of the Unix command line, Django installation and deployment strategies, tools for development, and more.

Appendix A, “Command Line Basics”
Appendix A is an introduction to the Unix command line for those who haven’t been exposed to it before now. Trust us—it’s useful!

Appendix B, “Installing and Running Django”
In Appendix B, learn how to install all the necessary components for running Django, including the various options for database and Web servers, as well as some tips on specific deployment strategies.

Appendix C, “Tools for Practical Django Development”
Appendix C outlines some basic development tools you may or may not be familiar with, including source control, text editors, and more.

Appendix D, “Finding, Evaluating, and Using Django Applications”
Good developers write code, but great developers reuse somebody else’s code! In Appendix D, we share some tips on the where and how of finding reusable Django applications.

Appendix E, “Django on the Google App Engine”
Appendix E provides an exclusive look at how Google’s new App Engine leverages Django, and you can also learn how to enable your Django applications to run under the App Engine framework.

Appendix F, “Getting Involved in the Django Project”
In Appendix F, learn how to contribute to Django and become a part of the community.

Via TimoBook
0 notes
Text
Blog for Assignments from the Data Science Toolkit Course
Assignment-1 ANOVA
Examined the NESARC data to determine the association between the mean number of cannabis joints consumed (Quantitative Response Variable) and the declared ethnicities.
This submission is split into the following sections:
Hypotheses
Data
Methodology
Results
Summary
Code (Appendix I)
Output from Code (Appendix II)
Hypotheses:
Null Hypothesis: There are no significant differences between the means of monthly cannabis joints smoked by respondents aged 18 - 60 and their declared ethnicities.
Alternate Hypothesis: There are significant differences between the means of monthly cannabis joints smoked by respondents aged 18 - 60 and their declared ethnicities.
Data
A subset of the NESARC data was used, consisting of respondents of all declared ethnicities between the ages of 18 and 60 who had smoked cannabis in the last 12 months.
Explanatory Variable - Declared Ethnicities (5 Levels)
Response Variable - The mean number of cannabis joints consumed per month in the last 12 months by respondents aged 18-60.
Methodology
The number of days per month on which a joint was smoked was multiplied by the declared number of joints smoked per day in the last 12 months to estimate the number of joints consumed in a month.

ANOVA was performed between the Explanatory Variable (Declared Ethnicities) and the Response Variable (the average number of joints smoked per month by a subset of respondents aged 18-60).
The F-statistic and associated P-value were calculated using the Ordinary Least Squares (OLS) function from the statsmodels library.
Post Hoc tests were run using Tukey’s Multiple Comparison Of Means.
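As a sketch of the method, here is a minimal one-way ANOVA on invented data. For brevity it uses `scipy.stats.f_oneway` rather than the statsmodels OLS route the assignment takes; the group values are hypothetical:

```python
from scipy import stats

# Hypothetical monthly joint counts for three ethnicity groups (invented data)
group1 = [30, 42, 25, 38, 41]
group2 = [65, 70, 58, 72, 61]
group3 = [75, 80, 69, 78, 74]

# One-way ANOVA: tests whether the group means differ significantly
f_stat, p_value = stats.f_oneway(group1, group2, group3)
print(f"F = {f_stat:.3f}, p = {p_value:.6f}")
```

A small p-value (conventionally p < 0.05) leads to rejecting the null hypothesis of equal group means, which is the same decision rule applied in the Summary.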
Results
American Indians and Alaska Natives (Group 3) were, on average, the highest consumers of cannabis, at 75.4 joints per month. Those who described themselves as White (Group 1) consumed the fewest on average, at 37.8 joints per month.

Differences between the means were observed across all groups. The mean differences ranged from -2.47 between Groups 4 and 5 (i.e. between Asian / Pacific Hawaiian and Hispanic or Latino) to 37.6 between Groups 1 and 3 (i.e. between White and American Indian / Alaska Native).
The F-Statistic was 3.451 and the P-Value was 0.00824.
Summary
The P-value is below 0.05 (P < 0.05), so we reject the null hypothesis of no significant differences between the mean monthly cannabis joints consumed across the respondents’ declared ethnicities.
To determine which means differed from each other, a Tukey Post Hoc Multiple Comparison of Means test was performed. It showed that only the difference in means between ethnic Group 1 (White) and Group 2 (Black) was significant; for all other pair-wise comparisons of mean monthly cannabis consumption between the declared ethnicities, the null hypothesis could not be rejected.
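The Tukey decision rule itself is simple: a pair of groups is flagged as significantly different exactly when the confidence interval for its mean difference excludes zero. A short sketch using three of the (group1, group2, lower, upper) rows from the results table:

```python
# Three pairwise rows from the Tukey output: (group1, group2, CI lower, CI upper)
pairs = [
    (1, 2, 4.5751, 50.9476),    # White vs Black
    (1, 3, -9.9903, 85.1982),   # White vs American Indian / Alaska Native
    (4, 5, -75.6217, 70.6818),  # Asian / Pacific Hawaiian vs Hispanic or Latino
]

# Reject the null for a pair when its confidence interval excludes zero
significant = [(g1, g2) for g1, g2, low, high in pairs if low > 0 or high < 0]
print(significant)  # [(1, 2)] - only the White vs Black comparison is significant
```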
Appendix I
Code:
# -*- coding: utf-8 -*-
"""
Created on Sun. 1st April 2018

@author: Lloney Monono
"""
import numpy
import pandas
import statsmodels.formula.api as statsmodform
import statsmodels.stats.multicomp as statsmodmulti

data = pandas.read_csv('C:\\Users\\Llone\\Documents\\Data Scientists Toolbox\\Python\\nesarc.csv',
                       low_memory=False)

# How often cannabis was used in the last 12 months, cast to numeric
data['S3BD5Q2C'] = pandas.to_numeric(data['S3BD5Q2C'], errors='coerce')
# Number of cannabis joints usually consumed in a day in the last 12 months
data['S3BQ3'] = pandas.to_numeric(data['S3BQ3'], errors='coerce')

# A subset of adults aged 18 to 60 who have had at least one joint every month
# in the past 12 months
sub1 = data[(data['AGE'] >= 18) & (data['AGE'] <= 60) & (data['S3BD5Q2C'] <= 6)].copy()

# Missing data not to be included in the analysis
sub1['S3BD5Q2C'] = sub1['S3BD5Q2C'].replace(99, numpy.nan)
sub1['S3BQ3'] = sub1['S3BQ3'].replace(99, numpy.nan)

# Recoding the number of days a joint was had in the past month; anything
# below 1x a month is ignored (see the condition <= category 6 above)
recode1 = {1: 30, 2: 24, 3: 16, 4: 8, 5: 3, 6: 1}
sub1['JOINTFREQMO'] = sub1['S3BD5Q2C'].map(recode1)

# Estimating the number of joints consumed in a month by multiplying the
# monthly frequency by the number of joints per day
sub1['EST_JOINTCOUNT'] = sub1['JOINTFREQMO'] * sub1['S3BQ3']

ct1 = sub1.groupby('EST_JOINTCOUNT').size()
print(ct1)

# More-than-two-levels ANOVA using Ethnicity (sub3)
sub3 = sub1[['EST_JOINTCOUNT', 'ETHRACE2A']].dropna()

model2 = statsmodform.ols(formula='EST_JOINTCOUNT ~ C(ETHRACE2A)', data=sub3).fit()
print(model2.summary())

print('means for EST_JOINTCOUNT by Declared Ethnicity')
m2 = sub3.groupby('ETHRACE2A').mean()
print(m2)

print('Standard deviations for EST_JOINTCOUNT by Declared Ethnicity')
sd2 = sub3.groupby('ETHRACE2A').std()
print(sd2)

# Post hoc test using Tukey's HSD
mc1 = statsmodmulti.MultiComparison(sub3['EST_JOINTCOUNT'], sub3['ETHRACE2A'])
res1 = mc1.tukeyhsd()
print(res1.summary())
Appendix II
Output:
dtype: int64

                            OLS Regression Results
==============================================================================
Dep. Variable:         EST_JOINTCOUNT   R-squared:                       0.014
Model:                            OLS   Adj. R-squared:                  0.010
Method:                 Least Squares   F-statistic:                     3.451
Date:                Sun, 01 Apr 2018   Prob (F-statistic):            0.00824
Time:                        15:53:51   Log-Likelihood:                -5913.5
No. Observations:                 978   AIC:                         1.184e+04
Df Residuals:                     973   BIC:                         1.186e+04
Df Model:                           4
Covariance Type:            nonrobust
=====================================================================================
                        coef    std err          t      P>|t|      [0.025      0.975]
-------------------------------------------------------------------------------------
Intercept            37.8015      4.376      8.638      0.000      29.214      46.389
C(ETHRACE2A)[T.2]    27.7614      8.484      3.272      0.001      11.112      44.411
C(ETHRACE2A)[T.3]    37.6039     17.415      2.159      0.031       3.428      71.780
C(ETHRACE2A)[T.4]    12.3860     26.004      0.476      0.634     -38.645      63.417
C(ETHRACE2A)[T.5]     9.9161      8.863      1.119      0.263      -7.476      27.308
==============================================================================
Omnibus:                     1458.578   Durbin-Watson:                   2.077
Prob(Omnibus):                  0.000   Jarque-Bera (JB):           608052.290
Skew:                           8.478   Prob(JB):                         0.00
Kurtosis:                     123.971   Cond. No.                         8.28
==============================================================================

Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.

means for EST_JOINTCOUNT by Declared Ethnicity
           EST_JOINTCOUNT
ETHRACE2A
1               37.801457
2               65.562814
3               75.405405
4               50.187500
5               47.717514

Standard deviations for EST_JOINTCOUNT by Declared Ethnicity
           EST_JOINTCOUNT
ETHRACE2A
1               75.687860
2              156.727470
3              139.172846
4               96.341904
5               88.870305

Multiple Comparison of Means - Tukey HSD, FWER=0.05
=================================================
group1 group2 meandiff   lower     upper   reject
-------------------------------------------------
  1      2     27.7614    4.5751   50.9476  True
  1      3     37.6039   -9.9903   85.1982  False
  1      4     12.3860  -58.6814   83.4535  False
  1      5      9.9161  -14.3047   34.1368  False
  2      3      9.8426  -40.3248   60.0100  False
  2      4    -15.3753  -88.1911   57.4404  False
  2      5    -17.8453  -46.7970   11.1064  False
  3      4    -25.2179 -109.0615   58.6257  False
  3      5    -27.6879  -78.3417   22.9659  False
  4      5     -2.4700  -75.6217   70.6818  False
-------------------------------------------------
0 notes
Text
Neural Compute Stick: Object Detection with Neural Networks
Convolutional Neural Networks (CNNs) have become the first choice for extracting information from visual data; they are used, for example, in the Google search engine to classify images. Essentially, they mimic the way a human being recognizes images: the system learns to recognize certain characteristics of an object, which in the case of a person could be the limbs or the head and face, and then produces a trained model. What’s great about it: the algorithm learns the characteristics of an object on its own during the training process; there’s no need to point them out manually. Of course, the system only detects objects it was previously trained on. The downside: you usually need a pretty fast graphics card to train a model or run inferences with it.
In this post, we use neural networks (specifically CNNs) to classify objects in a video stream on a cheap, ordinary PC without a dedicated GPU at all. This is made possible by the recently released Movidius Neural Compute Stick.
youtube
Movidius Neural Compute Stick
The Neural Compute Stick (NCS) is a tiny computer meant to accelerate the execution of neural networks. The entire hardware is packed into a USB stick that is compatible with USB 2.0 or later, although it is recommended to use the device on a USB 3.0 port if possible. Internally, the NCS uses a so-called Vision Processing Unit (VPU) that goes by the name of Myriad 2. This relatively new kind of microprocessor is made specifically for machine-vision tasks and is therefore very energy-efficient: according to the manufacturer, the typical power consumption of the VPU is around 1 watt. The NCS is on sale for around 80 US dollars, but as of September 2017 it is only available in a few online shops.

The Movidius Neural Compute Stick is not a universal solution for deep learning. It is not possible to train a model on it; instead, you can only run inferences on input data, such as a video stream, with a pre-trained model. This means you will need to train your model first on your computer, which, as already mentioned, can take a lot of time. Luckily, the NCS community already provides some pre-trained models that are available for free. Theoretically, every model that was made with the deep-learning library Caffe (.caffemodel) is compatible, but you will have to convert it for the NCS. Also, make sure the model uses only one input variable; the NCS is currently limited to that. In the screenshot below we see the classification of images from a webcam input stream. This example uses GoogLeNet.
Classification of images from a webcam stream on the NCS running GoogLeNet.
Preparations and Setup:
The software you need to program the Neural Compute Stick and use it on the target platform is available for download on the Movidius page. That includes some example applications like the code for the application in the screenshot above. The models for that are also available for download, among which are some widely known ones like the AlexNet or the GoogLeNet.
Requirements for the development platform:
x64-PC running a native Ubuntu Linux, virtual machines are not supported
Windows and MacOS are (currently) not supported
The Linux distribution has to be an Ubuntu 16.04 LTS.
On Ubuntu you need to have Python version 3.5.2 installed.
Note: The target platform does not have to meet these requirements. In case you are wondering: yes, you can also plug it into your Raspberry Pi; Movidius explicitly mentions that the NCS is compatible.
Requirements to run the example code for the NCS:
You need to have a webcam (internal or external)
GStreamer 1.0 and its plugin sets “Base”, “Good”, “Bad”, and “Ugly” have to be installed.
Installation instructions for Gstreamer: http://ift.tt/2ymjyrY.
Installing Toolkit and API:
Before we can use the Movidius Neural Compute Stick, the API and Toolkit have to be installed. The Toolkit is used to convert or test a model for the NCS; the Movidius API allows you to access the functionality of the Neural Compute Stick, for example with Python. Here is the download page: http://ift.tt/2uJZWMj. The installation is done by simply running a Bash script in the unzipped Toolkit/API folder.
$ ./setup.sh
Keep in mind that the script, especially the one for the Toolkit, can take a long time to complete (15-30 minutes) and needs a stable internet connection.
Before we can use a model with the stick, we have to convert it with this script from the Toolkit:
$ python3 ./mvNCCompile.pyc sample_network.prototxt -w sample_network.caffemodel -s 12 -o name_of_outputfile
We can now use the generated “graph” file (default name, if not specified) in an application for the NCS.
Object detection with Tiny YOLO
Now comes the hard part. We want to be able to find various objects in a given scene and identify what they are. The models provided by Movidius are not up to this task: they can only classify one object at a time, i.e. one per frame, because they assume that an input image shows only one relevant object in close-up. YOLO (You Only Look Once), in contrast, is a neural network that can detect and localize multiple objects in one frame; on top of that, YOLO can tell persons apart from objects in a given scene. Tiny YOLO is the little brother of YOLO, a resource-saving alternative for weaker devices. Thanks to various optimizations, it enables the NCS to run object detection almost in real time (approximately 0.2 s of processing time per frame). Naturally, this comes at a cost, and so the error rate increases noticeably; for our purposes, however, the detection is still sufficiently accurate. A few developers on the Movidius forums have already ported Tiny YOLO to the NCS. The result of their efforts is available for download on GitHub: http://ift.tt/2yo3aaB. There you can find some example code written in Python and the Tiny YOLO model itself.
Set-up Tiny YOLO on the NCS
To run the sample code for Tiny YOLO, some additional software is required on the development and target platforms. You have to build OpenCV as well as ffmpeg from source on your platform. A simple installation via Python’s own package manager pip is not sufficient, because that “light” edition of OpenCV is missing some important parts, which means OpenCV would probably be unable to access your camera and start the video stream. Furthermore, you need a webcam with Linux-compatible drivers.

Let’s start with ffmpeg. To install ffmpeg under Linux, follow the instructions in the official guide: http://ift.tt/1lboHUu. Once the installation is finished, use the command below in a new terminal to check whether ffmpeg can find and use your camera. Replace /dev/video0 with your video source.
$ ffmpeg -f v4l2 -list_formats all -i /dev/video0
The output should look something like this:
[video4linux2,v4l2 @ 0x2753960] Raw       : yuyv422 : YUYV 4:2:2  : 640x480 352x288 320x240 176x144 160x120 1280x720 640x480
[video4linux2,v4l2 @ 0x2753960] Compressed: mjpeg   : Motion-JPEG : 640x480 352x288 320x240 176x144 160x120 1280x720 640x480
After that, we’re done with ffmpeg. Let’s move on to OpenCV: http://ift.tt/1YnH1Mj. To roughly check whether OpenCV has been installed successfully, you can run some of the samples in your installation folder under /bin.
As already mentioned in the introduction to the Movidius stick, the Caffemodel of Tiny YOLO needs to be converted into a format that is compatible with the NCS. The software for this task is included in the Movidius Toolkit. The exact command is:
$ python3 ./mvNCCompile.pyc your_path/yolo_tiny_deploy.prototxt -s 12
Before you execute it, make sure the Caffemodel and the corresponding prototxt file are in the same folder. Both files must have the same base name; otherwise, the conversion will silently fail without throwing an error, and the generated model will be useless.
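Because the failure is silent, a quick sanity check on the base names before running the converter can save time. A tiny hypothetical helper (both the helper and the file names are invented for illustration):

```python
from pathlib import Path

def names_match(prototxt: str, caffemodel: str) -> bool:
    """True when both files share the same base name, as the converter requires."""
    return Path(prototxt).stem == Path(caffemodel).stem

print(names_match("yolo_tiny_deploy.prototxt", "yolo_tiny_deploy.caffemodel"))  # True
print(names_match("yolo_tiny_deploy.prototxt", "yolo_tiny.caffemodel"))         # False
```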
Tiny YOLO on the NCS
Now we are finally ready to see Tiny YOLO in action. As a last step, move the “py_examples” folder from the “yoloNCS” repo into the “ncapi” folder (the unpacked NCS API); alternatively, you can create a symlink. The example code includes these two samples:
yolo_example: detects objects in a .jpg image with the Tiny YOLO model and highlights the objects it finds in the image.
yolo_object_detection_app: detects objects in a video stream from your webcam and highlights the objects it finds in the video.
Let’s start the “object_detection_app” with Python 3. If everything is set up correctly, you will now see the video stream of your webcam, in which Tiny YOLO highlights the objects it has learned. The numbers indicate how closely a detected object resembles the trained template. The similarity level above which the system considers an object “detected” is configurable, which means we end up with either more false positives or more false negatives, depending on our requirements. The last screenshot and the video at the top of this page demonstrate what Tiny YOLO object detection looks like with a webcam.
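The trade-off between false positives and false negatives can be illustrated with a small sketch. The labels and confidence scores below are invented, and this is not the actual NCS API:

```python
# Hypothetical detections from one frame, as (label, confidence) pairs
detections = [("person", 0.92), ("dog", 0.61), ("chair", 0.34), ("cat", 0.12)]

def filter_detections(detections, threshold):
    """Keep only detections whose confidence meets the threshold."""
    return [(label, score) for label, score in detections if score >= threshold]

# A low threshold admits more false positives, a high one more false negatives
print(filter_detections(detections, 0.5))  # [('person', 0.92), ('dog', 0.61)]
print(filter_detections(detections, 0.9))  # [('person', 0.92)]
```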
Object detection in a webcam stream on the NCS running the Tiny YOLO model.
The model, as it is currently trained, detects 20 different classes, including persons and a number of animals. Detection of human beings works best; “lifeless” items, however, are sometimes not recognized.
The post Neural Compute Stick: Object Detection with Neural Networks appeared first on codecentric AG Blog.
Neural Compute Stick: Object Detection with Neural Networks published first on http://ift.tt/2fA8nUr
0 notes
Text
ANOVA test on ADDHEALTH data
This is a blog about ANOVA test for ADDHEALTH data.
Data: The National Longitudinal Study of Adolescent Health (AddHealth) is a representative school-based survey of adolescents in grades 7-12 in the United States. The Wave 1 survey focuses on factors that may influence adolescents’ health and risk behaviors, including personal traits, families, friendships, romantic relationships, peer groups, schools, neighborhoods, and communities.
Research Question: Is the BMI of the population sample dependent on the number of TV hours watched?
Null Hypothesis: The BMI of the subjects in the sample is not affected by the number of television hours watched per week.
The number of hours per week has been categorized into three bins (1-7, 7-18, 18-50) to examine mean differences between the groups. ANOVA has therefore been used to analyze this research question, since more than two levels are involved.
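The binning step can be sketched with `pandas.cut`. The bin edges follow the description above, and the sample values are invented:

```python
import pandas as pd

# Hypothetical weekly TV hours for five respondents
tv_hours = pd.Series([2, 10, 30, 5, 20])

# Three bins as described above: 1-7, 7-18, and 18-50 hours per week
tv_binned = pd.cut(tv_hours, bins=[0, 7, 18, 50], labels=["1-7", "7-18", "18-50"])
print(list(tv_binned))  # ['1-7', '7-18', '18-50', '1-7', '18-50']
```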
Below the steps performed to reach the data sample have been explained .
The following snippets of code is written in python to read data and perform the analysis.
import pandas
import numpy
import statsmodels.formula.api as smf
import statsmodels.stats.multicomp as multi
data = pandas.read_csv('addhealth.csv', low_memory=False)
Below are the descriptive statistics for the number of hours of TV watched per week. There are a total of 6,504 observations. The maximum of 998 is not realistic; cross-referencing the codebook shows it is a code for missing data or a refused response.
data["H1DA8"].describe()
count    6504.000000
mean       20.308272
std        64.930714
min         0.000000
25%         5.000000
50%        12.000000
75%        21.000000
max       998.000000
Name: H1DA8, dtype: float64
Below are the descriptive statistics for the height columns in feet and inches. A maximum of 98 or 99 is observed; these are codes for no response or data not available.
data["H1GH59A"].describe()
count    6504.000000
mean        6.283210
std        10.485217
min         4.000000
25%         5.000000
50%         5.000000
75%         5.000000
max        98.000000
Name: H1GH59A, dtype: float64
data["H1GH59B"].describe()
count    6504.000000
mean        6.582565
std        11.582289
min         0.000000
25%         3.000000
50%         5.000000
75%         8.000000
max        99.000000
Name: H1GH59B, dtype: float64
Filter the data to heights between 4 and 6 feet and TV hours watched per week of at most 50, excluding the no-response codes for height, TV hours, and weight.
sub1=data[(data['H1GH59A']>=4) & (data['H1GH59A']<=6) & (data['H1GH59B']<=12) & (data['H1DA8']<=50) & (data['H1GH60'] <= 500)]
sub1['H1GH59A'].describe()
count    6075.000000
mean        5.086914
std         0.381950
min         4.000000
25%         5.000000
50%         5.000000
75%         5.000000
max         6.000000
Name: H1GH59A, dtype: float64
sub1['H1GH59B'].describe()
count    6075.000000
mean        5.244774
std         3.269102
min         0.000000
25%         3.000000
50%         5.000000
75%         8.000000
max        11.000000
Name: H1GH59B, dtype: float64
Calculate height in inches.
sub1['HEIGHT']=sub1['H1GH59A'] * 12 + sub1['H1GH59B']
C:\Users\haree\Anaconda3\lib\site-packages\ipykernel\__main__.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
  if __name__ == '__main__':
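The SettingWithCopyWarning above appears because sub1 is a slice of the original DataFrame. A minimal sketch of the remedy pandas suggests, using an explicit copy (the column names and toy values here are illustrative, taken from the snippet above):

```python
import pandas as pd

df = pd.DataFrame({"H1GH59A": [5, 5, 6], "H1GH59B": [4, 10, 0]})

# Taking an explicit copy of the filtered slice makes later
# column assignments unambiguous and silences the warning.
sub1 = df[df["H1GH59A"] >= 5].copy()
sub1["HEIGHT"] = sub1["H1GH59A"] * 12 + sub1["H1GH59B"]
print(sub1["HEIGHT"].tolist())  # [64, 70, 72]
```

Applying .copy() once after the filtering step would remove this warning from all of the later column assignments as well.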
sub1["HEIGHT"].describe()
count    6075.000000
mean       66.287737
std         4.121049
min        48.000000
25%        63.000000
50%        66.000000
75%        69.000000
max        81.000000
Name: HEIGHT, dtype: float64
Weight descriptive statistics
sub1['H1GH60'].describe()
count    6075.000000
mean      141.106831
std        34.074604
min        50.000000
25%       118.000000
50%       135.000000
75%       160.000000
max       360.000000
Name: H1GH60, dtype: float64
Calculate BMI = 703 * Weight (lbs)/ height^2 (inch^2)
sub1['BMI'] = sub1.apply(lambda row: 703*row['H1GH60']/row['HEIGHT']**2, axis = 1)
Convert H1DA8 into a categorical TVHOURS variable with three bins: (-1, 7], (7, 18], and (18, 50].
sub1['TVHOURS'] = pandas.cut(sub1['H1DA8'], bins=[-1, 7, 18, 50], labels=False)
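With labels=False, pandas.cut returns integer bin codes rather than interval labels, which is why the groups in the results below are named 0, 1 and 2. A small self-contained illustration (the hours values are synthetic):

```python
import pandas as pd

hours = pd.Series([0, 3, 7, 8, 15, 19, 40, 50])
# labels=False returns the integer bin index, so the three bins
# (-1, 7], (7, 18] and (18, 50] become groups 0, 1 and 2.
codes = pd.cut(hours, bins=[-1, 7, 18, 50], labels=False)
print(codes.tolist())  # [0, 0, 0, 1, 1, 2, 2, 2]
```

Note that pandas.cut bins are right-inclusive by default, so 7 falls into group 0 and 18 would fall into group 1.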
model1 = smf.ols(formula='BMI ~ C(TVHOURS)', data=sub1)
results1 = model1.fit()
print (results1.summary())
                            OLS Regression Results
==============================================================================
Dep. Variable:                    BMI   R-squared:                       0.003
Model:                            OLS   Adj. R-squared:                  0.003
Method:                 Least Squares   F-statistic:                     9.690
Date:                Sun, 16 Jul 2017   Prob (F-statistic):           6.29e-05
Time:                        15:26:12   Log-Likelihood:                -17593.
No. Observations:                6075   AIC:                         3.519e+04
Df Residuals:                    6072   BIC:                         3.521e+04
Df Model:                           2
Covariance Type:            nonrobust
===================================================================================
                      coef    std err          t      P>|t|      [0.025      0.975]
-----------------------------------------------------------------------------------
Intercept          22.1619      0.093    237.366      0.000      21.979      22.345
C(TVHOURS)[T.1]     0.2712      0.137      1.979      0.048       0.003       0.540
C(TVHOURS)[T.2]     0.5985      0.136      4.401      0.000       0.332       0.865
==============================================================================
Omnibus:                     1822.675   Durbin-Watson:                   1.935
Prob(Omnibus):                  0.000   Jarque-Bera (JB):             6241.254
Skew:                           1.498   Prob(JB):                         0.00
Kurtosis:                       6.960   Cond. No.                         3.60
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
sub2 = sub1[['BMI', 'TVHOURS']].dropna()
print ('means for BMI by TVHOURS')
m1= sub2.groupby('TVHOURS').mean()
print (m1)
means for BMI by TVHOURS
               BMI
TVHOURS
0        22.161924
1        22.433088
2        22.760428
From the model results, an F-statistic of 9.690 with Prob (F-statistic) = 6.29e-05 is obtained, so we can reject the null hypothesis (p < 0.05). However, the group means calculated above do not tell us which group is significantly different from the others, so a post hoc test is performed below to find that out.
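The omnibus F-test can also be cross-checked independently of the OLS formulation, for example with scipy’s one-way ANOVA. A sketch on synthetic data (the three groups below only mimic the BMI group means and spreads reported above; they are not the AddHealth sample):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Three synthetic BMI-like groups with slightly shifted means,
# loosely matching the group statistics reported in this post.
g0 = rng.normal(22.2, 4.0, 500)
g1 = rng.normal(22.4, 4.4, 500)
g2 = rng.normal(22.8, 4.7, 500)

# f_oneway performs the same omnibus test as the OLS F-statistic
# for a categorical predictor.
f_stat, p_value = stats.f_oneway(g0, g1, g2)
print(f_stat, p_value)
```

As with the OLS result, a significant omnibus F only says that at least one group mean differs; it does not identify which pair, which is what the Tukey HSD test is for.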
print ('standard deviations for BMI by TVHOURS')
sd1 = sub2.groupby('TVHOURS').std()
print (sd1)
standard deviations for BMI by TVHOURS
              BMI
TVHOURS
0        4.025568
1        4.440422
2        4.694750
Tukey’s test has been performed on the groups to see which group is significantly different from others. Below are the results.
mc1 = multi.MultiComparison(sub2['BMI'], sub2['TVHOURS'])
res1 = mc1.tukeyhsd()
print(res1.summary())
Multiple Comparison of Means - Tukey HSD, FWER=0.05
============================================
group1 group2 meandiff  lower   upper  reject
--------------------------------------------
  0      1     0.2712  -0.05    0.5924 False
  0      2     0.5985   0.2797  0.9173 True
  1      2     0.3273  -0.0028  0.6574 False
--------------------------------------------
Inference: The ANOVA test on the data gave a p-value below 0.05, which is enough evidence to reject the null hypothesis: there is a relationship between the number of television hours watched per week and the BMI of the sample population. But the omnibus test does not say which group is statistically different from the others, so a post hoc Tukey test was performed. The results above clearly show that groups 0 and 2 are statistically different. That is, as TV hours increase (bin 2 containing the most television hours), BMI increases: the kids who watch television for 0-7 hours a week have a significantly lower BMI than the kids who watch 19-50 hours a week.
Text
MUDF - Assignment 3 (python)
1) CODE
# -*- coding: utf-8 -*-
"""
Created on 04/23/17

@author: kb
"""
import pandas as pd
import numpy as np
# any additional libraries would be imported here
#Had an error reading the csv with utf-8 encoding, so changed to "ISO-8859-1" and the error was resolved
data = pd.read_csv('mudf_ART_df3.csv', encoding = "ISO-8859-1", low_memory=False)
print ('Number of observations (rows)')
print (len(data)) #number of observations (rows)
print ('Number of variables (columns)')
print (len(data.columns)) # number of variables (columns)
#setting variables we will be working with to numeric
#convert_objects(convert_numeric=True) has been deprecated, so pd.to_numeric is used instead
#LOCALE4: 1=CITY; 2=SUBURB; 3=TOWN; 4=RURAL
data['LOCALE4'] = pd.to_numeric(data['LOCALE4'])
#BEAREG: 1=New England; 2=Mid-East; 3=Great Lakes; 4=Plains; 5=Southeast; 6=Southwest; 7=Rocky Mountains; 8=Far West
data['BEAREG'] = pd.to_numeric(data['BEAREG'])
#NAICS: North American Industry Classification System: https://www.census.gov/eos/www/naics/2017NAICS/2017_NAICS_Manual.pdf
data['NAICS'] = pd.to_numeric(data['NAICS'])
#IPEDS: unique identifier for colleges and universities: https://nces.ed.gov/ipeds
# 1 = IPEDS does not equal 0, 0 = IPEDS equals 0
data['IPEDS'] = pd.to_numeric(data['IPEDS'])

#ADDING TITLES
print ('Counts for LOCALE4: 1=CITY; 2=SUBURB; 3=TOWN; 4=RURAL')
#data['LOCALE4'] = data['LOCALE4'].astype(str)
ct1= data.groupby('LOCALE4').size()
print (ct1)
print ('Percentages for LOCALE4')
pt1 = data.groupby('LOCALE4').size() * 100 / len(data)
print (pt1)

print ('Counts for BEAREG: 1=New England; 2=Mid-East; 3=Great Lakes; 4=Plains; 5=Southeast; 6=Southwest; 7=Rocky Mountains; 8=Far West')
ct2= data.groupby('BEAREG').size()
print (ct2)
print ('Percentages for BEAREG')
pt2 = data.groupby('BEAREG').size() * 100 / len(data)
print (pt2)

data['NAICS'] = data['NAICS'].astype(str)
print ('Counts for NAICS: North American Industry Classification System for types of businesses')
ct3= data.groupby('NAICS').size()
print (ct3)
print ('Percentages for NAICS: https://www.census.gov/eos/www/naics/2017NAICS/2017_NAICS_Manual.pdf')
pt3 = data.groupby('NAICS').size() * 100 / len(data)
print (pt3)
#create new column: EDU_INST=yes identifies art museums and galleries associated with a college or university
#EDU_INST=no indicates art museums and galleries not associated with a college or university
#EDU_INST=yes indicates an IPEDS identifier was included in the IPEDS column (unique identifier for colleges and universities: https://nces.ed.gov/ipeds)
data['EDU_INST'] = np.where(data['IPEDS'] > 0, 'yes', 'no')
print ('Counts of Art Museums and Galleries that are associated with a college or university')
pd.set_option('display.float_format', lambda x:'%f'%x)
ct4= data.groupby('EDU_INST').size()
print (ct4)
print ('Percentage of Art Museums and Galleries that are associated with a college or university')
pt4 = data.groupby('EDU_INST').size() * 100 / len(data)
print (pt4)
#recoding values for LOCALE4 into a new variable, LOCALE4_NAME
recode1 = {1: "CITY", 2: "SUBURB", 3: "TOWN", 4: "RURAL"}
data['LOCALE4_NAME']= data['LOCALE4'].map(recode1)

#recoding values for NAICS into a new variable, NAICS_BASED_CATEGORY
recode2 = {'519120': 'Arts Library',
           '611000': 'Arts Education Center (non-accredited)',
           '611210': '2-Year Community College',
           '611310': '4-Year College or University',
           '611610': '4-Year Arts College (accredited)',
           '710000': 'Art Center or Association',
           '711100': 'Performing Art Center',
           '712110': 'Art Museum or Gallery'}
data['NAICS_BASED_CATEGORY']= data['NAICS'].map(recode2)
# subset variables in new data frame, sub1
sub1=data[['MID','GZIP5', 'IPEDS', 'EDU_INST', 'NAICS', 'NAICS_BASED_CATEGORY', 'BEAREG', 'LOCALE4_NAME', 'LOCALE4', 'GCITY', 'GSTATE']]

#reduce to subset of values required for frequency table showing NAICS art institution frequency in LOCALE4 areas: CITY, SUBURB, TOWN, RURAL
sub2=sub1[['NAICS', 'NAICS_BASED_CATEGORY', 'LOCALE4_NAME', 'LOCALE4']]

#use groupby to group values for LOCALE4 for each of the NAICS variables
sub3=sub2.groupby(['NAICS_BASED_CATEGORY','LOCALE4_NAME'])

# use size to print all four LOCALE4 values for each NAICS -- see https://community.modeanalytics.com/python/tutorial/pandas-groupby-and-python-lambda-functions/
print ('Frequency table created using Pandas size to sum number of each LOCALE4 value for each NAICS value')
sub4=sub3.size()
print (sub4)

# use unstack to display LOCALE4 values as four columns for each type of LOCALE4 -- see https://community.modeanalytics.com/python/tutorial/pandas-groupby-and-python-lambda-functions/
print ('Frequency table created using Pandas size, and adding unstack to reformat so LOCALE4 variables are shown as columns')
sub5=sub3.size().unstack()
#display nan values as 0 and float with no decimals
sub5=sub5.fillna(0)
pd.options.display.float_format = '{:,.0f}'.format
print (sub5)
#upper-case all DataFrame column names - place after code for loading data above
data.columns = map(str.upper, data.columns)
# bug fix for display formats to avoid run time errors - put after code for loading data above
#pd.set_option('display.float_format', lambda x:'%f'%x)
2) OUTPUT
runfile('C:/Users/KMB2/Documents/Python_Wesleyan/ASS_2/ASS_3_codebook_03c_final_02.py', wdir='C:/Users/KMB2/Documents/Python_Wesleyan/ASS_2')
Number of observations (rows)
3241
Number of variables (columns)
44
Counts for LOCALE4: 1=CITY; 2=SUBURB; 3=TOWN; 4=RURAL
LOCALE4
1    1406
2     697
3     372
4     766
dtype: int64
Percentages for LOCALE4
LOCALE4
1    43
2    22
3    11
4    24
dtype: float64
Counts for BEAREG: 1=New England; 2=Mid-East; 3=Great Lakes; 4=Plains; 5=Southeast; 6=Southwest; 7=Rocky Mountains; 8=Far West
BEAREG
1    247
2    575
3    455
4    308
5    680
6    300
7    151
8    525
dtype: int64
Percentages for BEAREG
BEAREG
1     8
2    18
3    14
4    10
5    21
6     9
7     5
8    16
dtype: float64
Counts for NAICS: North American Industry Classification System for types of businesses
NAICS
519120       8
611000       7
611210     347
611310    1312
611610     101
710000     410
711000      13
711100      21
712110    1022
dtype: int64
Percentages for NAICS: https://www.census.gov/eos/www/naics/2017NAICS/2017_NAICS_Manual.pdf
NAICS
519120     0
611000     0
611210    11
611310    40
611610     3
710000    13
711000     0
711100     1
712110    32
dtype: float64
Counts of Art Museums and Galleries that are associated with a college or university
EDU_INST
no     1520
yes    1721
dtype: int64
Percentage of Art Museums and Galleries that are associated with a college or university
EDU_INST
no     46.899105
yes    53.100895
dtype: float64
Frequency table created using Pandas size to sum number of each LOCALE4 value for each NAICS value
NAICS_BASED_CATEGORY                    LOCALE4_NAME
2-Year Community College                CITY       86
                                        RURAL     152
                                        SUBURB     94
                                        TOWN       15
4-Year Arts College (accredited)        CITY       53
                                        RURAL      19
                                        SUBURB     22
                                        TOWN        7
4-Year College or University            CITY      474
                                        RURAL     425
                                        SUBURB    292
                                        TOWN      121
Art Center or Association               CITY      187
                                        RURAL      53
                                        SUBURB     91
                                        TOWN       79
Art Museum or Gallery                   CITY      578
                                        RURAL     111
                                        SUBURB    189
                                        TOWN      144
Arts Education Center (non-accredited)  CITY        2
                                        RURAL       1
                                        SUBURB      4
Arts Library                            CITY        5
                                        TOWN        3
Performing Arts Center                  CITY       14
                                        RURAL       2
                                        SUBURB      2
                                        TOWN        3
dtype: int64
Frequency table created using Pandas size, and adding unstack to reformat so LOCALE4 variables are shown as columns
LOCALE4_NAME                            CITY  RURAL  SUBURB  TOWN
NAICS_BASED_CATEGORY
2-Year Community College                  86    152      94    15
4-Year Arts College (accredited)          53     19      22     7
4-Year College or University             474    425     292   121
Art Center or Association                187     53      91    79
Art Museum or Gallery                    578    111     189   144
Arts Education Center (non-accredited)     2      1       4     0
Arts Library                               5      0       0     3
Performing Arts Center                    14      2       2     3
3) DESCRIPTION
IPEDS (Integrated Postsecondary Education Data System):
1) Values added to IPEDS (where the value was missing) using values from an online database
2) New variable EDU_INST created using IPEDS and used to create a frequency table
Percentage of Art Museums and Galleries that are associated with a college or university:
EDU_INST
no     46.899105
yes    53.100895
In Assignment 2, I started to fill in the unique IPEDS number (see the Assignment 2 blog) for each listing in the MUDF dataset for art galleries and museums affiliated with a 2- or 4-year accredited college or university where the value was blank. Using this information, I created a new variable called EDU_INST that has a “yes” value for all rows that include an IPEDS number and “no” for those that do not. Then I created a frequency table showing the percentage of all art organizations and institutions that are affiliated with an accredited college or university and those that are not. For Assignment 3, I found a few more entries affiliated with accredited colleges or universities, added the appropriate IPEDS numbers, and the percentage of affiliated art museums and galleries increased slightly.
NAICS (North American Industry Classification System):
1) Values added and re-binned for NAICS and used to create a new frequency table using LOCALE4 and NAICS.
2) Both NAICS and LOCALE4 value names were renamed to be more meaningful.
LOCALE4_NAME: names replacing the numeric LOCALE4 values
1: CITY
2: SUBURB
3: TOWN
4: RURAL
NAICS_BASED_CATEGORY: new names and bins for NAICS values
611210: 2-Year Community College
611310: 4-Year College or University
611610: 4-Year Arts College (accredited)
710000: Art Center or Association
712110: Art Museum or Gallery
611000: Arts Education Center (non-accredited)
519120: Arts Library
711100: Performing Art Center
Frequency table that shows the Locale of all rows in dataset grouped by their NAICS_BASED_CATEGORY:
LOCALE4_NAME                            CITY  RURAL  SUBURB  TOWN
NAICS_BASED_CATEGORY
2-Year Community College                  86    152      94    15
4-Year Arts College (accredited)          53     19      22     7
4-Year College or University             474    425     292   121
Art Center or Association                187     53      91    79
Art Museum or Gallery                    578    111     189   144
Arts Education Center (non-accredited)     2      1       4     0
Arts Library                               5      0       0     3
As I added NAICS values to rows where the value was blank, I noticed that the majority of the art centers and associations had no NAICS value. I don’t think there is a NAICS category that covers these types of entities. Since this was one of the main categories I’m interested in, I decided to use “710000”, the general NAICS category for “Arts, Entertainment, and Recreation”. So I essentially re-binned this category and a few other NAICS categories into fewer bins for the purposes of this assignment, to provide a very general impression of the information that interests me. Since the values no longer strictly follow NAICS categories, I renamed the variable “NAICS_BASED_CATEGORY”. One of the most interesting results of this frequency table is the number of 2- and 4-year colleges and universities with art galleries and museums that are in rural areas. I think these schools, as well as art centers in rural areas, might have more community involvement than larger museums and universities in cities; unfortunately, I’m not sure MUDF has data that will help determine that. But I’d also like to investigate in what part of the United States these rural entities are located, and MUDF should be able to help with that.