#pycon 2012
europythonsociety · 6 years ago
List of EPS Board Candidates for 2019/2020
At this year’s EuroPython Society General Assembly we will vote in a new board of the EuroPython Society for the term 2019/2020.
List of Board Candidates
The EPS bylaws require one chair and 2 - 8 board members. The following candidates have stated their willingness to work on the EPS board. We are presenting them here (in alphabetical order by surname).
Prof. Martin Christen
Teaching Python / using Python for research projects
Martin Christen is a professor of Geoinformatics and Computer Graphics at the Institute of Geomatics at the University of Applied Sciences Northwestern Switzerland (FHNW). His main research interests are geospatial Virtual- and Augmented Reality, 3D geoinformation, and interactive 3D maps. Martin is very active in the Python community. He teaches various Python-related courses and uses Python in most research projects. He organizes the PyBasel meetup - the local Python User Group Northwestern Switzerland. He also organizes the yearly GeoPython conference. He is a board member of the Python Software Verband e.V.
I would be glad to help with EuroPython, to be part of a great team that makes the next edition of EuroPython even better, wherever it will be hosted.
Raquel Dou
Linguist / Python enthusiast
Raquel befriended Python in 2013, during her MSc in Evolution of Language and Cognition, where she used Python to model a simple communication system’s evolution over time. She runs a business providing language services and often uses Python to make her work and life easier and more fun.
She was an onsite volunteer in 2018 when EuroPython took place at her doorstep (Edinburgh), and has since been helping with preparations for the 2019 conference in the support and sponsor workgroups.
Anders Hammarquist
Pythonista / Consultant / Software architect
Anders brought Python to Open End (née Strakt), a Python software company focusing on data organization, when it was founded in 2001. He has used Python in various capacities since 1995.
He helped organize EuroPython 2004 and 2005, and has attended and given talks at several EuroPythons since then. He has handled the Swedish financials of the EuroPython Society since 2016 and has served as board member since 2017.
Marc-André Lemburg
Pythonista / CEO / Consultant / Coach
Marc-Andre is the CEO and founder of eGenix.com, a Python-focused project and consulting company based in Germany. He has a degree in mathematics from the University of Düsseldorf. His work with and for Python started in 1994. He became Python Core Developer in 1997, designed and implemented the Unicode support in Python and continued to maintain the Python Unicode implementation for more than a decade. Marc-Andre is a founding member of the Python Software Foundation (PSF) and has served on the PSF Board several times.
In 2002, Marc-Andre was on the executive committee to run the first EuroPython conference in Charleroi, Belgium. He also co-organized the second EuroPython 2003 conference. Since then, he has attended every single EuroPython conference and continued being involved in the workings of the conference organization.
He was elected as board member of the EuroPython Society (EPS) in 2012 and enjoyed the last few years working with the EPS board members on steering the EuroPython conference to the new successful EuroPython Workgroup structures to sustain the continued growth, while maintaining the EuroPython spirit and fun aspect of the conference.
For the EuroPython 2017, 2018 and 2019 editions, Marc-Andre was chair of the EuroPython Society and ran lots of activities around the conference organization, e.g. managing the contracts and budget, helping with sponsors, the website, setting up the conference app, writing blog posts and many other things that were needed to make EuroPython happen.
Going forward, he would like to intensify work on turning the EPS into an organization which aids Python adoption in Europe not only by running the EuroPython conference, but also by helping build organizer networks and providing financial help to other Python conferences in Europe.
Silvia Uberti
Sysadmin / IT Consultant
She is a Sysadmin with a degree in Network Security, really passionate about technology, traveling and her piano.  
She’s an advocate for women in STEM disciplines and supports inclusiveness of underrepresented people in tech communities.
She fell in love with Python and its warm community during PyCon Italia in 2014 and became a member of the EuroPython Sponsor Workgroup in 2017. She enjoys working in it a lot and wants to help more!
What does the EPS Board do?
The EPS board runs the day-to-day business of the EuroPython Society, including running the EuroPython conference events. It is allowed to enter contracts for the society and handle any issues that have not been otherwise regulated in the bylaws or by the General Assembly. Most business is handled by email on the board mailing list or the board's Telegram group; board meetings are usually run as conference calls.
It is important to note that the EPS board is an active board, i.e. the board members are expected to put in a significant amount of time and effort towards the goals of the EPS and for running the EuroPython conference. This usually means at least 100-200 hours of work over a period of one year, with most of this being needed in the last six months before the conference. Many board members put in even more work to make sure that the EuroPython conferences become a success.
Board members are generally expected to take on leadership roles within the EuroPython Workgroups.
Enjoy, – EuroPython Society
astrosilverio · 7 years ago
How not to hack Python’s virtual memory
On Friday I decided I wanted to muck around with a Python process's virtual memory and do something simple, like change a value from outside the process. There are blog posts describing how to do this, and I expected to be up and running and causing havoc after an hour or so of light effort.
I should probably mention that I have no good reason for attempting this. There are good reasons for peeking at a process's memory; for example, I kept getting hits on Julia Evans' Ruby profiler blog posts while googling for how macOS even /procs (bear with me, explanation later). If you're writing a profiler, you definitely want to peek at other processes' stacks, heaps, VM, etc. I am not doing that. I just have bad ideas.
First steps
I started out with a vague directive ("change something in a Python process from elsewhere") and started investigating naively, before reading any blog posts or digging around on stack overflow. [Not that there is anything wrong with using tools; I just tend to retain information better when I start from first principles]. I had some facts on hand:
Processes use virtual memory, an abstraction in which physical memory addresses (actual locations on your hardware) get mapped to virtual addresses. As far as I understand, this system provides security (as an attacker, I can't meaningfully guess what addresses contain useful information) and simplifying abstractions for the process itself (as a user, you can run multiple processes at once without having to worry that they might both be using hardcoded addresses that overlap).
Python objects have ids that correspond to the objects' addresses in virtual memory.
I can use id(obj) to see where particular objects live in Python's virtual memory space, and I can even use a module in the standard library called ctypes to set particular bytes in memory and therefore modify VM from inside Python (theoretically; I haven't gotten it to work yet). However that's not quite what I want, and I was still curious about how Python allocates virtual memory anyway.
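As a quick illustration of that claim (my own sketch, not from the blog posts I mention later), you can feed id() back into ctypes and take a read-only peek at an object's raw bytes from inside the process. This assumes CPython, where id() happens to be the object's virtual memory address:

```python
import ctypes
import sys

x = 123456789
addr = id(x)  # CPython detail: id() is the object's address in virtual memory

# read the raw bytes of the int object out of our own process's VM
raw = ctypes.string_at(addr, sys.getsizeof(x))
print(len(raw) == sys.getsizeof(x))  # True
```

No writes yet, so nothing can go wrong; the havoc comes later.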
"Aha", I thought. "There must be something in /proc for this." One of my standard debugging techniques is to just ls /proc/$pid and see if I can find anything relevant in there. For the uninitiated, /proc is a pseudo filesystem in Linux that provides interesting system information in a format consistent with the rest of your real filesystem. Every process has a "folder" in /proc indexed by process id that contains "files" full of juicy tidbits; try cat /proc/$pid/status for starters.
Dear readers, I am on MacOS. MacOS does not have /proc. I'm honestly a little embarrassed that it took me so long to notice that I do not have /proc. This is the exact moment that I started frantically googling.
MacOS has tools too! And we get an interesting-ish result
Light digging reveals that there is a /proc/$pid/maps that shows you what each block of a process's VM is for, and that the MacOS equivalent is vmmap. Default output of vmmap includes the type of block and a brief description of what it's for; for a Python process, you'll see non-writable blocks of type __TEXT that are labelled with the path to your local copy of Python and its libraries. There are also address ranges, sizes, and a column for "sharing mode" that describes if and how pages are shared between processes. The C code objects used by the standard library, for example, are frequently stored in "copy-on-write" share-mode blocks. The vast majority of the time, you won't be modifying the datetime library, so that code can be shared between processes. However, if you do modify that library in one process, you would not want those changes to suddenly appear in another process; the logical solution is to share the code by default but copy it if you write to it, hence the name "copy-on-write". It's abbreviated COW, which is why it captured my interest in the first place.
Next I messed around and imported libraries and created objects in the repl and looked up the regions where they lived. Even at the edges of my creativity, most of my objects lived in blocks of memory labelled "DefaultMallocZone" of type "MALLOC_", like MALLOC_TINY or MALLOC_LARGE. That they all belonged to "DefaultMallocZone" made sense to me; after all, everything that I was examining lived in the heap, and I would be concerned if the heap was not all contained under the same label. The different block sizes are an optimization enabled by the operating system that facilitates garbage collection. As explained in this post, different-size MALLOC regions have different "resolutions"; MALLOC_TINY, for example, quantizes its allocations in units of 16 bytes, whereas MALLOC_LARGE has page-level granularity. Finer-grained MALLOC regions let you pack tiny objects in more densely and enable finer-grained garbage collection. However, fine-grained garbage collection is a poor choice for larger objects, which would cause a lot of scanning. The upshot for us is that integers end up in MALLOC_TINY regions and functions end up in MALLOC_LARGE regions.
At this point I realized that I was having a lot of fun but had digressed from my mission. Moving forward.
I miss you, /proc
Next, I found a blog post that describes, step by step, a method for modifying Python objects at the VM level. I gave it a skim and learned that there is in fact a /proc tool that will let you look at a process's VM: /proc/$pid/mem. Once the authors find where the variable they want to modify lives, they overwrite it in /proc/$pid/mem.
Well, /proc/$pid/maps has a MacOS equivalent, so now I just have to find the /proc/$pid/mem counterpart, right?
Wrong.
Turns out MacOS is safe or something? There's a Mach (the Mac kernel) function called task_for_pid that "gives you access to a process's port", which seems useful, but it can also do bad things to your computer and requires some security work, and I don't want to deal with either.
Next steps? Lessons learned?
I crashed and burned at task_for_pid after a day of frustration. I still don't really have a good reason to be messing with Python's VM, but now I am annoyed and need to do something bad. Therefore, the next step is to figure out how to get ctypes.memmove working so that I can at least muck with a Python process from inside it.
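In fact, here's roughly the kind of inside-the-process mischief I'm after, sketched with ctypes.memmove. This is my own illustration and leans hard on a CPython implementation detail (a bytes object's payload sits right after its fixed-size header), so treat it as exactly the kind of bad idea it looks like:

```python
import ctypes
import sys

s = b"hello"

# CPython detail: sys.getsizeof(b"") is the bytes header plus one NUL byte,
# so the payload of any bytes object starts at this offset from its address
payload_offset = sys.getsizeof(b"") - 1

# overwrite the "immutable" bytes object in place, from inside the process
ctypes.memmove(id(s) + payload_offset, b"jello", 5)
print(s)  # b'jello'
```

Mutating an "immutable" object like this breaks real invariants (cached hashes, shared constants), which is precisely the chaos I was hoping for.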
While I'll be ultimately dissatisfied if my day's worth of effort does not result in at least some chaos, I have actually learned some useful things about Mac vs Linux, malloc on Mac, and how Python manages memory (see this transcript).
Until next time,
Your local frustrated agent of chaos
gjlondon · 8 years ago
Sing, goddess, the rage of George and the ImportError,
and its devastation, which put pains thousandfold upon his programs,
hurled in their multitudes to the house of Hades strong ideas
of systems, but gave their code to be the delicate feasting
of dogs, of all birds, and the will of Guido was accomplished
since that time when first there stood in division of conflict
Brett Cannon’s son the lord of modules: the brilliant import keyword. . .
Backstory Before Things Get Weird
This post is about how an ImportError led me to a very strange place.
I was writing a simple Python program. It was one of my first attempts at Python 3.
I tried to import some code and got an ImportError. Normally I solve ImportErrors by shuffling files around until the error goes away. But this time none of my shuffling solved the problem. So I found myself actually reading the official documentation for the Python import system. Somehow I'd spent over five years writing Python code professionally without ever reading more than snippets of those particular docs.
What I learned there changed me.
Yes, I answered my simple question, which had something to do with when I should use .'s in an import:
    # most of the time, don't use dots at all:
    from spam import eggs

    # if Python has trouble finding spam and spam.py is in the same
    # directory (i.e. the same "package") as the code doing the import:
    from .spam import eggs

    # when spam.py is in the *enclosing* package, i.e. one level up:
    from ..spam import eggs

    # SyntaxError! At least in Python 3, you can only use the dots with
    # the `from a import b` syntax:
    import .spam
But more importantly I realized that this whole time I had never really understood what the word module means in Python.
According to the official Python tutorial, a "module" is a file containing Python definitions and statements.
In other words, spam.py is a module.
But it's not quite that simple. In my running Python program, if I import requests, then what is type(requests)?
It's module.
That means module is a type of object in a running Python program. And requests in my running program is derived from requests.py, but it's not the same thing.
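You can check this yourself in any interpreter (I'm using the stdlib's json here instead of requests, so there's nothing to install):

```python
import json
import types

# the thing bound by `import` is a runtime object of type `module`
print(type(json))  # <class 'module'>
assert isinstance(json, types.ModuleType)
```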
So what is the module class in Python and how is babby module formed?
Modules and the Python Import System
Modules are created automatically in Python when you import. It turns out that the import keyword in Python is syntactic sugar for a somewhat more complicated process. When you import requests, Python actually does two things:
1) Calls an internal function: __import__('requests') to create, load, and initialize the requests module object
2) Binds the local variable requests to that module
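The two steps above can be written out by hand (again swapping in json for requests so it runs anywhere):

```python
# `import json` is (roughly) syntactic sugar for:
json = __import__('json')    # 1) create, load, and initialize the module object
                             # 2) bind the local name `json` to it
print(json.dumps({"a": 1}))  # the bound name works like any normal import
```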
And how exactly does __import__() create, load, and initialize a module?
Well, it's complicated. I'm not going to go into full detail, but there's a great video where Brett Cannon, the main maintainer of the Python import system, painstakingly walks through the whole shebang.
But in a nutshell, importing in Python has 5 steps:
1. See if the module has already been imported
Python maintains a cache of modules that have already been imported. The cache is a dictionary held at sys.modules.
If you try to import requests, __import__ will first check if there's a module in sys.modules named "requests". If there is, Python just gives you the module object in the cache and does not do any more work.
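You can see the cache doing its job, since a second import hands back the identical object:

```python
import sys
import json                  # first import: executes the module, caches it

assert "json" in sys.modules
import json as json2                 # second import: pure cache lookup
print(json2 is sys.modules["json"])  # True: the exact same module object
```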
If the module isn't cached (usually because it hasn't been imported yet, but also maybe because someone did something nefarious...) then:
2. Find the source code using sys.path
sys.path is a list in every running Python program that tells the interpreter where it should look for modules when it's asked to import them. Here's an excerpt from my current sys.path:
    # the directory our code is running in:
    '',
    # where my Python executable lives:
    '/Users/rogueleaderr/miniconda3/lib/python3.5',
    # the place where `pip install` puts stuff:
    '/Users/rogueleaderr/miniconda3/lib/python3.5/site-packages'
When I import requests, Python goes and looks in those directories for requests.py. If it can't find it, I'm in for an ImportError. I'd estimate that the large majority of real-life ImportErrors happen because the source code you're trying to import isn't in a directory that's on sys.path. Move your module or add the directory to sys.path and you'll have a better day.
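Here's a tiny self-contained demo of that failure mode and fix (the module name spam_demo is made up for the example):

```python
import os
import sys
import tempfile

# write a hypothetical module somewhere Python doesn't look by default
d = tempfile.mkdtemp()
with open(os.path.join(d, "spam_demo.py"), "w") as f:
    f.write("eggs = 42\n")

try:
    import spam_demo            # fails: the directory isn't on sys.path yet...
except ImportError:
    sys.path.append(d)          # ...so tell the interpreter where to look
    import spam_demo

print(spam_demo.eggs)  # 42
```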
In Python 3, you can do some pretty crazy stuff to tell Python to look in esoteric places for code. But that's a topic for another day!
3. Make a Module object
Python has a builtin type called ModuleType. Once __import__ has found your source code, it'll create a new ModuleType instance and attach your module.py's source code to it.
Then, the exciting part:
4. Execute the module source code!
__import__ will create a new namespace, i.e. scope, i.e. the __dict__ attribute attached to most Python objects. And then it will actually exec your code inside of that namespace.
Any variables or functions that get defined during that execution are captured in that namespace. And the namespace is attached to the newly created module, which is itself then returned into the importing scope.
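Steps 3 and 4 can be simulated in a few lines: make a bare module object, then exec some source inside its namespace (the module name "handmade" and its source are invented for the demo):

```python
import types

# a minimal sketch of what __import__ does once it has your source code
source = "greeting = 'hello'\ndef shout():\n    return greeting.upper()\n"
mod = types.ModuleType("handmade")
exec(source, mod.__dict__)     # run the code inside the module's namespace

# the definitions were captured in the module's namespace
print(mod.shout())  # HELLO
```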
5. Cache the module inside sys.modules
If we try to import requests again, we'll get the same module object back. Steps 2-5 will not be repeated.
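And since step 1 consults sys.modules before anything else, you can plant a module in the cache by hand and import it even though no corresponding file exists anywhere; this is the "nefarious" possibility hinted at earlier (the name fake_cached_mod is made up):

```python
import sys
import types

mod = types.ModuleType("fake_cached_mod")
mod.x = 1
sys.modules["fake_cached_mod"] = mod   # plant it in the import cache

import fake_cached_mod          # no file on disk: served straight from cache
print(fake_cached_mod is mod)   # True
```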
Okay! This is a pretty cool system. It lets us write many pretty Python programs.
But, if we're feeling demented, it also lets us write some pretty dang awful Python programs.
Where it gets weird
I learned how to fix my immediate import problem. That wasn't enough.
With these new import powers in hand, I immediately started thinking about how I could use them for evil, rather than good. Because, as we know:
(c. Five Finger Tees)
So far, the worst idea I've had for how to misuse the Python import system is to implement a mergesort algorithm using just the import keyword. At first I didn't know if it was possible. But, spoiler alert, it is!
It doesn't actually take much code. It just takes the stubbornness to figure out how to subvert a lot of very well-intentioned, normally helpful machinery in the import system.
We can do this. Here's how:
Remember that when we import a module, Python executes all the source code.
So imagine I start up Python and define a function:
    >>> def say_beep():
    ...     print("beep!.........beep!")
    ...
    >>> say_beep()
This will print out some beeps.
Now imagine instead I write the same lines of code as above into a file called say_beep.py. Then I open my interpreter and run
    >>> import say_beep
What happens? The exact same thing: Python prints out some beeps.
If I create a module that contains the same source code as the body of a function, then importing the module will produce the same result as calling the function.
Well, what if I need to return something from my function body? Simple:
    # make_beeper.py
    beeper = lambda: print("say beep")

    # main.py
    from make_beeper import beeper
    beeper()
Anything that gets defined in the module is available in the module's namespace after it's imported. So from a import b is structurally the same as b = f(), if I structure my module correctly.
Okay, what about passing arguments? Well, that gets a bit harder. The trick is that Python source code is just a long string, so we can modify the source of a module before we import it:
    # with_args.py
    a = None
    b = None
    result = a + b

    # main.py
    src = ""
    with open("with_args.py") as f:
        for line in f:
            src += line

    a = "10"
    b = "21"
    src = src.replace("a = None", f"a = {a}")
    src = src.replace("b = None", f"b = {b}")

    with open("with_args.py", "w") as f:
        f.write(src)

    from with_args import result
    print(result)  # it's 31!
Now this certainly isn't pretty. But where we're going, nothing is pretty. Buckle up!
How to mergesort
Okay...how can we apply these ideas to implement mergesort?
First, let's quickly review what mergesort is: it's a recursive sorting algorithm with n log n worst-case computational complexity (meaning it's pretty darn good, especially compared to bad sorting algorithms like bubble sort that have n^2 complexity.)
It works by taking a list, splitting it in half, and then splitting the halves in half until we're left with individual elements.
Then we merge adjacent elements by interleaving them in sorted order. Take a look at this diagram:
Or read the Wikipedia article for more details.
Some rules
No built-in sorting functionality. Python's built-in sort uses a derivative of mergesort, so just putting result = sorted(lst) into a module and importing it isn't very sporting.
No user-defined functions at all.
All the source code has to live inside one module file, which we will fittingly call madness.py
The code
Well, here's the code: (Walk-through below, if you don't feel like reading 100 lines of bizarre Python)
    """
    # This is the algorithm we'll use:
    import sys
    import re
    import inspect
    import os
    import importlib
    import time

    input_list = []
    sublist = input_list

    is_leaf = len(sublist) < 2
    if is_leaf:
        sorted_sublist = sublist
    else:
        split_point = len(sublist) // 2
        left_portion = sublist[:split_point]
        right_portion = sublist[split_point:]

        # get a reference to the code we're currently running
        current_module = sys.modules[__name__]
        # get its source code using stdlib's `inspect` library
        module_source = inspect.getsource(current_module)

        # "pass an argument" by modifying the module's source
        new_list_slug = 'input_list = ' + str(left_portion)
        adjusted_source = re.sub(r'^input_list = \[.*\]', new_list_slug,
                                 module_source, flags=re.MULTILINE)

        # make a new module from the modified source
        left_path = "left.py"
        with open(left_path, "w") as f:
            f.write(adjusted_source)

        # invalidate caches; force Python to do the full import again
        importlib.invalidate_caches()
        if "left" in sys.modules:
            del sys.modules['left']

        # "call" the function to "return" a sorted sublist
        from left import sorted_sublist as left_sorted

        # clean up by deleting the new module
        if os.path.isfile(left_path):
            os.remove(left_path)

        new_list_slug = 'input_list = ' + str(right_portion)
        adjusted_source = re.sub(r'^input_list = \[.*\]', new_list_slug,
                                 module_source, flags=re.MULTILINE)

        right_path = "right.py"
        with open(right_path, "w") as f:
            f.write(adjusted_source)

        importlib.invalidate_caches()
        if "right" in sys.modules:
            del sys.modules['right']

        from right import sorted_sublist as right_sorted

        if os.path.isfile(right_path):
            os.remove(right_path)

        # merge
        merged_list = []
        while (left_sorted or right_sorted):
            if not left_sorted:
                bigger = right_sorted.pop()
            elif not right_sorted:
                bigger = left_sorted.pop()
            elif left_sorted[-1] >= right_sorted[-1]:
                bigger = left_sorted.pop()
            else:
                bigger = right_sorted.pop()
            merged_list.append(bigger)

        # there's probably a better way to do this that doesn't
        # require .reverse(), but appending to the head of a
        # list is expensive in Python
        merged_list.reverse()
        sorted_sublist = merged_list

    # not entirely sure why we need this line, but things
    # don't work without it!
    sys.modules[__name__].sorted_sublist = sorted_sublist
    """
    import random
    import os
    import time

    random.seed(1001)
    list_to_sort = [int(1000*random.random()) for i in range(100)]
    print("unsorted: {}".format(list_to_sort))

    mergesort = __doc__
    adjusted_source = mergesort.replace('input_list = []',
                                        'input_list = {}'.format(list_to_sort))
    with open("merge_sort.py", "w") as f:
        f.write(adjusted_source)

    from merge_sort import sorted_sublist as sorted_list

    os.remove("merge_sort.py")
    finished_time = time.time()

    print("original sorted: {}".format(sorted(list_to_sort)))
    print("import sorted: {}".format(sorted_list))
    assert sorted_list == sorted(list_to_sort)
That's all we need.
Breaking it down
Madness itself
The body of madness.py is compact. All it does is generate a random list of numbers, grab our template implementation of merge sort from its own docstring (how's that for self-documenting code?), jam in our random list, and kick off the algorithm by running
from merge_sort import sorted_sublist as sorted_list
The mergesort implementation
This is the fun part.
First, here is a "normal" implementation of merge_sort as a function:
    def merge_sort(input_list):
        if len(input_list) < 2:
            # it's a leaf
            return input_list
        else:
            # split
            split_point = len(input_list) // 2
            left_portion, right_portion = input_list[:split_point], input_list[split_point:]

            # recursion
            left_sorted = merge_sort(left_portion)
            right_sorted = merge_sort(right_portion)

            # merge
            merged_list = []
            while left_sorted or right_sorted:
                if not left_sorted:
                    bigger = right_sorted.pop()
                elif not right_sorted:
                    bigger = left_sorted.pop()
                elif left_sorted[-1] >= right_sorted[-1]:
                    bigger = left_sorted.pop()
                else:
                    bigger = right_sorted.pop()
                merged_list.append(bigger)
            merged_list.reverse()
            return merged_list
It has three phases:
Split the list in half
Call merge_sort recursively until the list is split down to individual elements
Merge the sublists we're working on at this stage into a single sorted sublist by interleaving the elements in sorted order
But since our rule says that we can't use functions, we need to replace this recursive function with import.
That means replacing this:
left_sorted = merge_sort(left_portion)
With this:
    # get a reference to the code we're currently running
    current_module = sys.modules[__name__]
    # get its source code using stdlib's `inspect` library
    module_source = inspect.getsource(current_module)

    # "pass an argument" by modifying the module's source
    new_list_slug = 'input_list = ' + str(left_portion)
    adjusted_source = re.sub(r'^input_list = \[.*\]', new_list_slug,
                             module_source, flags=re.MULTILINE)

    # make a new module from the modified source
    left_path = "left.py"
    with open(left_path, "w") as f:
        f.write(adjusted_source)

    # invalidate caches
    importlib.invalidate_caches()
    if "left" in sys.modules:
        del sys.modules['left']

    # "call" the function to "return" a sorted sublist
    from left import sorted_sublist as left_sorted

    # clean up by deleting the new module
    if os.path.isfile(left_path):
        os.remove(left_path)

    # not entirely sure why we need this line, but things
    # don't work without it! Might be to keep the sorted sublist
    # alive once this import goes out of scope?
    sys.modules[__name__].sorted_sublist = sorted_sublist
And that's really it.
We just use the tools we learned about to simulate calling functions with arguments and returning values. And we add a few lines to trick Python into not caching modules and instead doing the full import process when we import a module with the same name as one that's already been imported. (If our merge sort execution tree has multiple levels, we're going to have a lot of different left.py's).
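The cache-busting trick deserves its own tiny demo. Rewriting a module's source and importing it again only works if you clear both caches first; here's a self-contained sketch (the module name left_demo is made up, and I disable bytecode writing so a stale .pyc can't confuse things):

```python
import importlib
import os
import sys
import tempfile

d = tempfile.mkdtemp()
sys.path.append(d)
sys.dont_write_bytecode = True           # keep stale .pyc files out of the demo
path = os.path.join(d, "left_demo.py")   # hypothetical module name

with open(path, "w") as f:
    f.write("value = 1\n")
import left_demo
assert left_demo.value == 1

with open(path, "w") as f:
    f.write("value = 2\n")
importlib.invalidate_caches()      # drop the import system's finder caches
del sys.modules["left_demo"]       # drop the module cache entry
import left_demo                   # Python genuinely re-executes the new source
print(left_demo.value)  # 2
```

Without those two cache-clearing lines, the second import would silently hand back the old module with value == 1.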
And that's how you abuse the Python import system to implement mergesort.
Many paths to the top of the mountain, but the view is a singleton.
It's pretty mindblowing (to me at least) that this approach works at all. But on the other hand, why shouldn't it?
There's a pretty neat idea in computer science called the Church-Turing thesis. It states that any effectively computable function can be computed by a universal Turing machine. The thesis is usually trotted out to explain why there's nothing you can compute with a universal Turing machine that you can't compute using lambda calculus, and therefore there's no program you can write in C that you can't, in principle, write in Lisp.
But here's a corollary: since you can, if you really want to, implement a Turing tape by writing files to the file system one bit at a time and importing the results, you can use the Python import system to simulate a Turing machine. That implies that, in principle, any computation that can be performed by a digital computer can be performed (assuming infinite space, time, and patience) using the Python import system.
The only real question is how annoying a computation will be to implement, and in this case Python's extreme runtime dynamism makes this particular implementation surprisingly easy.
The Python community spends a lot of time advocating for good methodology and "idiomatic" coding styles. They have a good reason: if you're writing software that's intended to be used, some methods are almost always better than their alternatives.
But if you're writing programs to learn, sometimes it's helpful to remember that there are many different models of computation under the sun. And especially in the era when "deep learning" (i.e. graph-structured computations that simulate differentiable functions) is really starting to shine, it's extra important to remember that sometimes taking a completely different (and even wildly "inefficient") approach to a computational problem can lead to startling success.
It's also nice to remember that Python itself started out as (and in a sense still is!) a ridiculously inefficient and roundabout way to execute C code.
Abstractions really matter. In the words of Alfred North Whitehead,
Civilization advances by extending the number of important operations which we can perform without thinking about them
My "import sort" is certainly not a useful abstraction. But I hope that learning about it will lead you to some good ones!
Nota Bene
In case it's not obvious, you should never actually use these techniques in any code that you're intending to actually use for anything.
But the general idea of modifying Python source code at import time has at least one useful (if not necessarily advisable) use case: macros.
Python has a library called macropy that implements Lisp-style syntactic macros in Python by modifying your source code at import time.
I've never actually used macropy, but it's pretty cool to know that Python makes the simple things easy and the insane things possible.
Finally, as bad as this mergesort implementation is, it allowed me to run a fun little experiment. We know that mergesort has good computation complexity compared to naive sorting algorithms. But how long does a list have to be before a standard implementation of bubble sort runs slower than my awful import-based implementation of mergesort? It turns out that a list only has to be about 50k items long before "import sort" is faster than bubble sort.
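For reference, here's the shape of bubble sort I have in mind for that comparison; this is a generic textbook version, not the exact benchmark code from the experiment:

```python
# a throwaway bubble sort, for comparison (n^2 worst case)
def bubble_sort(lst):
    lst = list(lst)               # don't mutate the caller's list
    n = len(lst)
    for i in range(n):
        swapped = False
        for j in range(n - 1 - i):
            if lst[j] > lst[j + 1]:
                # bubble the larger element toward the end
                lst[j], lst[j + 1] = lst[j + 1], lst[j]
                swapped = True
        if not swapped:           # no swaps means already sorted: bail early
            break
    return lst

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```

(Yes, this breaks the "no user-defined functions" rule, but that rule only applied inside madness.py.)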
Computational complexity is a powerful thing!
All the code for this post is on GitHub
douchebagbrainwaves · 8 years ago
WORK ETHIC AND SCALE
One founder said this should be your compass. They've tried hard to look middle class. You see it in action, preferably by working at one. In the late 20th century they were synonymous with inefficiency. Good hackers can always get some kind of dreamer who sketched artists' conceptions of rocket ships on the side of being harsh to founders. There are two big forces driving change in startup funding: the multi-week mating dance with acquirers. Google was that type of idea too. I don't think it's fitting that kids should use the whole language. Norton, 2012. The wrong choices. 1300, with number replaced by gender. Fortunately, this flaw should be easy to write a prototype that solves a subset of the problem is that once you have a fairly tolerant advisor, you can succeed by sucking up to the right parties in New York when Giuliani introduced the reforms that made the most money the soonest with the least possible effort.
It was not always this way. It lets you take advantage of it in these terms, but the companies on either side, like Carnegie's steelworks, which made the rails, and Standard Oil, which used railroads to get oil to the East Coast. But I don't recommend this approach to most founders, including many who will go on to start a startup for patent infringement. Most of the things I find missing when I look at the way successful founders have had to use NT voluntarily, multiple times, according to their site. Copying is a good thing. It's not a deal till the money's in the bank so far. Whereas the independence of the townsmen allowed them to keep whatever wealth they created. But I think it is. But once you prove yourself as a good investor has committed, get them to give you enough money to hire people to do tedious work.
That sounds good. I was thinking recently how inconvenient it was not to have to be at least some of them will amount to anything. They had to want it! It was also obvious to us that the most successful startups seem to be of the same curve can be high. I forgot about that. They would have both carrot and stick to motivate them. Kids are less perceptive. Then it's mechanical; phew. Most of the people I worked with were some of my best friends. Numbers stick in people's heads.
Nearly all customers choose the competing product, a downturn in the economy, a delay in getting funding or regulatory approval, a patent suit against a gaming startup called Xfire in 2005. We didn't start it mainly to make money by creating wealth, you could make a fortune without stealing it. Ditto for investors. You sense there is something even better: Live in the future. They're nearly all going to be replaced by what we say no to a Google and there were presumably people in a room deciding to start something. They're already stuck with a seller's market. Since the invention of the quartz movement, an ordinary Timex is more accurate than a Patek Philippe costing hundreds of thousands of dollars to pay your expenses while you develop a prototype.
If we improve your outcome by 10%, you're net ahead, you wouldn't have or shouldn't have done it already. They could sense that the higher you go the fewer instances you find. To the extent there is a natural place for things to give as venture funding becomes more and more fields will see as time goes on. In practice most successful startups generally ride some wave bigger than themselves, it could be because it's more personal and comes earlier in the venture funding market means that the top ten firms live in a world ruled by intelligence. This concept is a simple yes, but it was simpler than they realized. By the end of the scale you have fields like math and physics, where no audience matters except your peers, and judging ability is sufficiently straightforward that hiring and admissions committees can do it without setting off the kind of startup where users come back each day, you've basically built yourself a giant tamagotchi. The most intriguing thing about this software, at the high end. The initial reaction to Y Combinator with a hardware idea, because most startups change their idea anyway. So one thing that may save them to some extent incompatible and admiration is a zero-sum game. A lot of the obstacles to ongoing diagnosis will come from some little startup. I know wrote: Two-firm deals are great. In the past this has not been a 100% indicator of success if only anything were but much better than enterprise software.
Who could make such a pledge will be very popular but from which they don't yet have significant growth, all the way back to high school, I find still have black marks against them in my mind. If investors are vague or resist answering such questions, assume the worst about machines and other people. They had ups and downs. Worrying that you're late. Startup investors work hard to satisfy us. And yet they can hold their own with any work of art is good: they mean it would engage any human. So in practice the medium steers you. Your primary goal is not to stop and think about that. And a particularly overreaching one at that, with fussy tastes and a rigidly enforced house style. It may be like doodling.
Did they explain the long-term potential, but you'll always interrupt working on it? And he'd be right. I swear I didn't prompt this one. For example, Web-based software, they will or at least one of them, and you're thus committing to search for one of the founders, who ought to be writing with conviction. That's what was killing all the potential users, or about people being sued for firing people. While environmental costs should be taken into account, because they're so boringly uniform. You know how there are some smart hackers there they could invest in.
The next best thing to an unmet need that is just what the river does: backtrack. Picking startups is a different business without realizing it, also protecting them from rewards. As it increases the gap in income, but it may be worth standing back and understanding what's going on is not so much because he was one of the principles the IRS uses in deciding whether to major in applied math. A physicist friend recently told me that when he went to work for Apple. For example, by doing a comparatively untargeted launch, and then join some other prestigious institution and work one's way up the hierarchy. If I spent half the day loitering on University Ave, I'd notice. Over and over, and tomorrow you'd have to find a cofounder. We just tried to be the most interesting questions you can ask yourself what would I tell my own kids? That's less the rule now. What Startups Are Really Like October 2009 This essay is derived from a keynote talk at PyCon 2003.
europythonsociety · 7 years ago
Text
List of EPS Board Candidates for 2018/2019
At this year’s General Assembly we will vote in a new board of the EuroPython Society for the term 2018/2019.
List of Board Candidates
The EPS bylaws require one chair and 2 - 8 board members. The following candidates have stated their willingness to work on the EPS board. We are presenting them here (in alphabetical order by surname).
Prof. Martin Christen
Teaching Python / using Python for research projects
Martin Christen is a professor of Geoinformatics and Computer Graphics at the Institute of Geomatics at the University of Applied Sciences Northwestern Switzerland (FHNW). His main research interests are geospatial Virtual- and Augmented Reality, 3D geoinformation, and interactive 3D maps. Martin is very active in the Python community. He teaches various Python-related courses and uses Python in most research projects. He organizes the PyBasel meetup - the local Python User Group Northwestern Switzerland. He also organizes the yearly GeoPython conference. He is a board member of the Python Software Verband e.V.
I would be glad to help with EuroPython, to be part of a great team that makes the next edition of EuroPython even better, wherever it will be hosted.
Dr. Darya Chyzhyk
PhD / Python programming enthusiast for research and science
Currently, Darya is a Post-Doc at the INRIA Saclay research center, France.
She has a degree in applied mathematics and defended her thesis in computer science. For the last 7 years Darya has been working on computer-aided diagnostic systems for brain diseases at the University of the Basque Country, Spain, and the University of Florida, USA, and she has been a member of the Computational Intelligence Group since 2009. Her aim is to develop computational methods for brain MRI processing and analysis, including open source tools, that help medical professionals in their research on specific pathologies.
She has experience in international conference organization and takes part in events for teenagers and kids, such as the Week of Science. She has participated in more than 10 international science conferences, trainings, and summer courses.
Board member of Python San Sebastian Society (ACPySS), on-site team of EuroPython 2015 and 2016, EPS board member since 2015.
Artur Czepiel
Pythonista / Web Programmer
Artur started writing in Python around 2008. Since then he has used it for fun, profit, and automation, mostly writing web backends and sysadmin scripts. In the last few years he has been slowly expanding that list of use cases with the help of data analysis tools like pandas.
At EuroPython 2017 he saw a talk about the EuroPython codebase and started contributing patches, later joining the Web and Support Workgroups. His plan for the next year is to write more patches, focusing on how the website (and other related software, like the helpdesk) can be modified to improve the workflows of other WGs.
Anders Hammarquist
Pythonista / Consultant / Software architect
Anders brought Python to Open End (née Strakt), a Python software company focusing on data organization, when we founded it in 2001. He has used Python in various capacities since 1995.
He helped organize EuroPython 2004 and 2005, and has attended and given talks at several EuroPythons since then. He has handled the Swedish financials of the EuroPython Society since 2016 and has served as board member since 2017.
Marc-André Lemburg
Pythonista / CEO / Consultant / Coach
Marc-Andre is the CEO and founder of eGenix.com, a Python-focused project and consulting company based in Germany. He has a degree in mathematics from the University of Düsseldorf. His work with and for Python started in 1994. He became Python Core Developer in 1997, designed and implemented the Unicode support in Python and continued to maintain the Python Unicode implementation for more than a decade. Marc-Andre is a founding member of the Python Software Foundation (PSF) and has served on the PSF Board several times.
In 2002, Marc-Andre was on the executive committee to run the first EuroPython conference in Charleroi, Belgium. He also co-organized the second EuroPython 2003 conference. Since then, he has attended every single EuroPython conference and continued being involved in the workings of the conference organization.
He was elected as a board member of the EuroPython Society (EPS) in 2012 and has enjoyed the last few years of working with the EPS board members on steering the EuroPython conference toward the successful new EuroPython Workgroup structures, which sustain the conference's continued growth while maintaining the EuroPython spirit and fun aspect of the conference.
For the EuroPython 2017 and 2018 editions, Marc-Andre was chair of the EuroPython Society and ran lots of activities around the conference organization, e.g. managing the contracts and budget, helping with sponsors and the website, setting up the conference app, writing blog posts and many other things that were needed to make EuroPython happen.
Going forward, he would like to intensify work on turning the EPS into an organization which aids Python adoption in Europe not only by running the EuroPython conference, but also by helping build organizer networks and providing financial help to other Python conferences in Europe.
Dr. Valeria Pettorino
PhD in physics / Astrophysics / Data Science / Space Missions / Python user
Valeria has more than 12 years of experience in research, communication and project management in Italy, the US, Switzerland, Germany and France. Since December 2016 she has been a permanent research engineer at CEA (Commissariat à l'énergie atomique) in Paris-Saclay. She is part of the international collaborations for the ESA/NASA Planck and Euclid space missions; among other projects, she is leading the forecast Taskforce that predicts Euclid performance.
She has been using Python both in astrophysics (for plotting and data interpretation) and for applications to healthcare IoT. She is an alumna of the Science to Data Science (S2DS) program and is passionate about the transfer of knowledge between industry and academia.
She took part in EuroPython 2016 as a speaker and has since helped co-organize EuroPython 2017 and 2018 in different WGs. She is an invited mentor for women in physics in the Supernova Foundation (http://supernovafoundation.org/) remote worldwide program.
Mario Thiel
Pythonista
Mario has been helping a lot with EuroPython in recent years, mostly supporting attendees through the helpdesk, working on-site to make sure setup and tear-down run smoothly, and more recently also contributing to the sponsors WG.
Mario will unfortunately not be able to attend EuroPython this year, but would still feel honored to be voted in to the board.
Silvia Uberti
Sysadmin / IT Consultant
She is a sysadmin with a degreeree in Network Security, really passionate about technology, traveling and her piano.
She's an advocate for women in STEM disciplines and supports inclusiveness of underrepresented people in tech communities.
She fell in love with Python and its warm community during PyCon Italia in 2014 and became a member of the EuroPython Sponsor Workgroup in 2017. She enjoys working in it a lot and wants to help more!
What does the EPS Board do ?
The EPS board runs the day-to-day business of the EuroPython Society, including running the EuroPython conference events. It is allowed to enter contracts for the society and handle any issues that have not been otherwise regulated in the bylaws or by the General Assembly. Most business is handled by email on the board mailing list or the board's Telegram group; board meetings are usually run as conference calls.
It is important to note that the EPS board is an active board, i.e. the board members are expected to put in a significant amount of time and effort towards the goals of the EPS and for running the EuroPython conference. This usually means at least 100-200 hours of work over a period of one year, with most of this needed in the last six months before the conference. Many board members put in even more work to make sure that the EuroPython conferences become a success.
Board members are generally expected to take on leadership roles within the EuroPython Workgroups.
Enjoy, – EuroPython Society