
Planet Python

Last update: July 30, 2014 08:49 AM

July 30, 2014


Stéphane Wirtel

EuroPython 2014

What’s EuroPython? EuroPython is the biggest Python event in Europe, and the first two editions took place in Charleroi, Belgium, my hometown.

This edition took place at the Berlin Congress Center, aka BCC, from July 21-27.

EuroPython 2014

There is a reason why I like to go to events: you can meet a lot of talented developers and talk with them, and EuroPython is the right place for that.

I was really proud to be at EuroPython for this edition; I was able to talk with the organisers and become a member of the EuroPython Society.

If you want to go to the event, you need to buy your ticket early because there is a limited number of seats. Ticket bought, I received my badge by mail at the beginning of July: it shows your full name and the days of the conference, and it gives you free access to Berlin's public transport. Really appreciated.

Hotel booked, flight booked and badge in my pocket: let's go to Berlin ;-)

Once you arrive in Berlin, you can take the TXL bus, whose terminus is Alexanderplatz, where my hotel and the BCC are located. The trip costs 2.60€, but with your EuroPython badge it is free. Awesome!

On Sunday, I went to the BCC to pick up my final badge and confirm my attendance. Once you arrive at the BCC, you discover the building, the event, the volunteers and the EuroPython team. You know, there is this little word for that: "Wouahhhhhh".

Each day of the week, there was a breakfast for the attendees, a lunch and a lot of coffee breaks.

On Monday, the first day, we received a bag at the BCC with the t-shirt, some discount vouchers, documents, and the programme of the event.

EuroPython started with a lot of talks; the first one was "One year of Snowden, what’s next?" by Constanze Kurz.

As I like to use Erlang for some projects, I attended Bob Ippolito's talk "What can Python learn from Haskell?". This talk was really useful for the Python community, because we learned that Python is not perfect and that we can improve it with ideas from other projects.

I will not give you the details of every talk, because I was not present at each one, but some talks were really awesome.

I appreciated Pieter Hintjens' keynote "Our decentralized future". I had met Pieter at FOSDEM during the Friday Beer Event. This guy is really awesome: he has a very open mind and you can talk with him for hours.

With the events sponsored by RhodeCode, or organised by Google, you can meet a lot of talented developers.

For example, on Monday, RhodeCode organised a Hiring event.

And on Tuesday, I was at the Computer Games Museum of Berlin, at an event organised by Google.

On Wednesday, there was the EuroPython dinner, sponsored by RhodeCode.

I remember being in the main room "C01" and wondering: "OK, man, this is not PythonFOSDEM, it's much bigger; how would you organise this kind of event?" At that moment, you understand there are a lot of steps before you could create your own edition of EuroPython.

But I started to learn, and now I am a member of the EuroPython Society and I am going to learn from the inside.

One little issue: the badge holders. But I think the organisers are already aware of it, and honestly, I could not have done better. For me, this edition of EuroPython was a real success.

FYI, there were 1226 attendees this year.

I would like to thank the organisers.

Your edition of EuroPython was really amazing!

Best regards,

Stephane Wirtel,

July 30, 2014 08:46 AM


ShiningPanda

An API to track your requirements

Requires.io helps you keep track of the requirements of your Python projects.

Today we are introducing an API to push your dependency files.

It's very simple to use, and this is definitely the way to go if you are not using GitHub or Bitbucket.

Get started in 4 steps:

  1. Sign Up for an API account
  2. Get your API token
  3. Install the requires.io package from PyPI
  4. Call requires.io on the command line

A typical use in a build or deployment script looks like this:

$ pip install -U requires.io
$ requires.io -a $API_TOKEN -r $REPO_NAME /path/to/my/repo

On the pricing side:

So register now!

July 30, 2014 06:00 AM

July 29, 2014


Martijn Faassen

On Naming In Open Source

Here are some stories on how you can go wrong with naming, especially in open source software.

Easy

Don't use the word "easy" or "simple" in your software's name: it won't be, and people will make fun of it.

Background

People tend to want to use the word 'easy' or 'simple' when things really are not, to describe a facade. They want to paper over immense complexity. Inevitably the facade will be a leaky abstraction, and developers using the software are exposed to it. And now you've named it 'easy', when it's anything but. Just don't give in to the temptation in the first place, and people won't make fun of it.

Examples

easy_install is a Python tool to easily and automatically install Python packages, similar to JavaScript npm or Ruby gems. pip is a more popular tool these days that does the same. easy_install hides, among many other complicated things, a full-fledged web scraper that follows links onto arbitrary websites to find packages. It's "easy" until it fails, and it will fail at one point or another.

SimpleItem is an infamous base class in Zope 2 that pulls in just about every aspect of Zope 2 as mixin classes. It's supposed to make it easy to create a new content type for Zope. The number of methods made available is truly intimidating and anything but simple.

Demo

Don't use the word "demo" or "sample" in your main codebase or people will depend on it and you will be stuck with it forever.

Background

It's tempting in some library or framework consisting of many parts to want to expose an integrated set of pieces, just as an example, within that codebase itself. Real use of it will of course have the developers integrating those pieces themselves. Except they won't, and now you have people using Sample stuff in real world code.

The word Sample or Demo is fine if the entire codebase is a demo, but it's not fine as part of a larger codebase.

Examples

SampleContainer was a part of Zope 3 that ended up serving as the base class of most actual container subclasses in real world code. It was just supposed to demonstrate how to do the integration.

Rewrite

Don't reuse the name of software for an incompatible rewrite, unless you want people to be confused about it.

Background

Your software has a big installed base. But it's not perfect. You decide to create a new, incompatible version, without a clear upgrade path. Perhaps you handwave the upgrade path "until later", but that then never happens.

Just name the new version something else. Because the clear upgrade path may never materialize, and people will be confused anyway. They will find documentation and examples for the old system if they search for the new one, and vice versa. Spare your user base that confusion.

The temptation to do this is great; you want to benefit from popularity of the name of the old system and this way attract users to the shiny new system. But that's exactly the situation where doing this is most confusing.

Examples

Zope 3: there was already a very popular Zope 2 around, and then we decided to completely rewrite it and named the rewrite "Zope 3". Some kind of upgrade path was promised but conveniently handwaved. Immense confusion arose. We then landed pieces of Zope 3 in the old Zope 2 codebase, and it took years to resolve all the confusion.

Company name

If you want an open source community, don't name the software after your company, or your company after the software.

Background

If you have a piece of open source software and you want an open source community of developers for it, then don't name it after your company. You may love your company, but outside developers get a clear indication that "the Acme Platform" is something that is developed by Acme. They know that as outside developers, they will never gain as much influence on the development of that software as developers working at Acme. So they just don't contribute. They go to other open source software that isn't so clearly allied to a single business and contribute there. And you are left to wonder why developers are not attracted to work on your software.

Similarly, you may have great success with an open source project and now want to name your own company after it. That sends a powerful signal of ownership to other stakeholders, and may deter them from contributing.

Of course naming is only a part of what makes an open source project look like something a developer can safely contribute to. But if you get the naming bit wrong, it's hard to get the rest right.

Add the potential entanglement into trademark politics on top of it, and just decide not to do it.

Examples

Examples omitted so I won't get into trouble with anyone.

July 29, 2014 02:37 PM


Machinalis

Embedding Interactive Charts on an IPython Notebook - Part 2

In Part 1 we discussed the basics of embedding JavaScript code into an IPython Notebook, and saw how to use this feature to integrate D3.js charts. In this part we'll show you how to do the same with Chart.js.

IPython Notebook

This post is also available as an IPython Notebook on github.com

Part 2 - Embedding ChartJS

First we need to declare the requirement using RequireJS:

%%javascript
require.config({
    paths: {
        chartjs: '//cdnjs.cloudflare.com/ajax/libs/Chart.js/0.2.0/Chart.min'
    }
});

The procedure is the same as before: we define a template that will contain the rendered JavaScript code, and we use display to embed the code into the running page.
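
As a rough sketch of that pattern, here is a simplified stand-in for the notebook's display_chart_chartjs callback (the render_chart helper and template below are ours; they assume the chartjs RequireJS alias configured above and a <canvas id="chart_chartjs"> element like the one created via the HTMLWidget further down):

from IPython.display import display, Javascript

JS_TEMPLATE = """
require(['chartjs'], function() {
    var ctx = document.getElementById('chart_chartjs').getContext('2d');
    new Chart(ctx).Bar({
        labels: %(labels)s,
        datasets: [{fillColor: '#79D1CF', data: %(values)s}]
    });
});
"""

def render_chart(labels, values):
    # Render the template with plain Python lists; their repr is valid
    # JavaScript for simple strings and numbers.
    display(Javascript(JS_TEMPLATE % {'labels': labels, 'values': values}))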

Chart.js is an HTML5 charting library capable of producing beautiful graphics with very little code. Now we want to plot the male and female population by region, division and state. We’ll use interact with a new callback function called display_chart_chartjs, but this time we don’t need custom widgets as we’re only selecting a single item (we’ll use a DropdownWidget). We’ve also included the show_javascript checkbox.

i = interact(
    display_chart_chartjs,
    sc_est2012_sex=widgets.fixed(sc_est2012_sex),
    region=widgets.fixed(region),
    division=widgets.fixed(division),
    show_javascript=widgets.CheckboxWidget(value=False),
    show=widgets.DropdownWidget(
        values={'By Region':'by_region', 'By Division': 'by_division', 'By State': 'by_state'},
        value='by_region'
    ),
    div=widgets.HTMLWidget(value='<canvas width=800 height=400 id="chart_chartjs"></canvas>')
)
(Screenshot: javascript_charts_3.png, the animated Chart.js column chart rendered by the code above.)

As you can see, the library generates a beautiful and simple animated column chart. There’s not much in terms of customization of Chart.js charts, but that makes it very easy to use.

In the last part (coming soon), we’ll show you how to embed Highcharts charts.

July 29, 2014 12:01 PM


End Point

Python Subprocess Wrapping with sh

When working with shell scripts written in bash/csh/etc., one of the primary tools you rely on is piping output and input between subprocesses to build up the complex logic that accomplishes the goal of the script. The same method of calling subprocesses and redirecting their input/output is available in Python, but the overhead of using it has been cumbersome enough to make Python a less desirable scripting language: in effect you were implementing large parts of the I/O facilities yourself, and potentially even writing replacements for the existing shell utilities that already perform the same work. Recently, Python developers tackled this problem by evolving an existing subprocess wrapper library called pbs into an easier to use library called sh.

Sh can be installed using pip, and the author has posted some documentation for the library here: http://amoffat.github.io/sh/

Using the sh library

After installing the library into your version of python, there are two ways to call any existing shell command available to the system. First, you can import the command as though it were itself a python module:

from sh import hostname
print(hostname())

In addition, you can also call the command directly by just referencing the sh namespace prior to the command name:

import sh
print(sh.hostname())

When running this command on my linux workstation (hostname atlas) it will return the expected results:

Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sh
>>> print(sh.hostname())
atlas

However, at this point we are merely replacing a single shell command which prints output to the screen. The real benefit of shell scripts is that you can chain commands together to create complex logic that helps you do work.

Advanced Gymnastics

A common use of shell scripts is to provide administrators the ability to quickly filter log file output and to potentially search for specific conditions within those logs, to alert in the event that an application starts throwing errors. With python piping in sh we can create a simple log watcher, which would be capable of calling anything we desire in python when the log file contains any of the conditions we are looking for.

To pipe together commands using the sh library, you would encapsulate each command in series to create a similar syntax to bash piping:

>>> print(sh.wc(sh.ls("-l", "/etc"), "-l"))
199

This command would have been equivalent to the bash pipe of "ls -l /etc | wc -l", indicating that the long listing of /etc on my workstation contained 199 lines of output. Each piped command is encapsulated inside the parentheses of the command that follows it in the pipeline.

For our log listener we will use the tail command along with a python iterator to watch for a potential error condition, which I will represent with the string "ERROR":

>>> for line in sh.tail("-f", "/tmp/test_log", _iter=True):
...     if "ERROR" in line:
...         print line

In this example, once executed, python will call the tail command to follow a particular log file. It will iterate over each line of output produced by tail and if any of the lines contain the string we are watching for python will print that line to standard output. At this point, this would be similar to using the tail command and piping the output to a string search command, like grep. However, you could replace the third line of the python with a more complex action, emailing the error condition out to a developer or administrator for review, or perhaps initiating a procedure to recover from the error automatically.
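
For instance, here is a minimal sketch of that idea, swapping the print for an e-mail notification (the addresses and the local SMTP relay are assumptions for the example, not part of the original snippet):

import smtplib
from email.mime.text import MIMEText

import sh

def alert(line):
    # E-mail the offending log line to an administrator (sample addresses).
    msg = MIMEText(line)
    msg['Subject'] = 'ERROR found in /tmp/test_log'
    msg['From'] = 'logwatcher@example.com'
    msg['To'] = 'admin@example.com'
    server = smtplib.SMTP('localhost')
    server.sendmail(msg['From'], [msg['To']], msg.as_string())
    server.quit()

for line in sh.tail("-f", "/tmp/test_log", _iter=True):
    if "ERROR" in line:
        alert(line)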

Conclusions

In this manner, with just a few lines of python, much like with bash, one can create a relatively complex process without recreating the shell commands which already perform this work, and without a convoluted wrapping process for passing output from command to command. This combination gives you the existing shell commands plus the power of python: all the functions available to any python environment, with the ease of using shell commands to do some of the work. In the future I will definitely be using this library for my own shell scripting needs, as I have generally preferred the syntax and ease of use of python over that of bash, but now I can enjoy both at the same time.

July 29, 2014 12:35 PM


Salim Fadhley

Low-hassle PEP-396

PEP-396 suggests that all python packages should provide a __version__ attribute. This should contain a string in the form x.y.z which gives you the package’s version number. That seems simple enough, except that Python does not provide an obvious way to set this attribute. It would be a tremendous waste of time to set this […]
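
For reference, the convention itself is tiny (a minimal sketch; the full post discusses how to avoid maintaining this by hand):

# mypackage/__init__.py
# The PEP-396 convention at its simplest: a dotted version string.
__version__ = '1.2.3'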

July 29, 2014 12:07 AM

July 28, 2014


Stéphane Wirtel

How to fix your virtualenv after an upgrade of Python with Homebrew

If you use Python installed via Homebrew on OSX together with virtualenv, then this article is for you.

After an upgrade of Python via Brew, you risk getting the following error when you try to use a virtualenv.

In my case, the directory where I store the virtualenvs is ~/.virtualenvs

$HOME/.virtualenvs/
    pelican/

Load the virtualenv.

$HOME/.virtualenvs (0) > source ~/.virtualenvs/pelican/bin/activate.sh

$HOME/.virtualenvs (0) > (pelican) pelican
dyld: Library not loaded: @executable_path/../.Python
  Referenced from: $HOME/.virtualenvs/pelican/bin/python
  Reason: image not found
fish: Job 1, 'pelican' terminated by signal SIGTRAP (Trace or breakpoint trap)

$HOME/.virtualenvs (0) > (pelican) deactivate

The problem is that the symlinks to the Python interpreter are broken and need to be regenerated. Fixing this is just a matter of removing the existing links and re-running the virtualenv command.

$HOME/.virtualenvs (0) > find pelican/ -type l
pelican/.Python
pelican/bin/python
pelican/bin/python2
pelican/include/python2.7
pelican/lib/python2.7/_abcoll.py
pelican/lib/python2.7/_weakrefset.py
pelican/lib/python2.7/abc.py
pelican/lib/python2.7/codecs.py
pelican/lib/python2.7/config
pelican/lib/python2.7/copy_reg.py
pelican/lib/python2.7/encodings
pelican/lib/python2.7/fnmatch.py
pelican/lib/python2.7/genericpath.py
pelican/lib/python2.7/lib-dynload
pelican/lib/python2.7/linecache.py
pelican/lib/python2.7/locale.py
pelican/lib/python2.7/ntpath.py
pelican/lib/python2.7/os.py
pelican/lib/python2.7/posixpath.py
pelican/lib/python2.7/re.py
pelican/lib/python2.7/sre.py
pelican/lib/python2.7/sre_compile.py
pelican/lib/python2.7/sre_constants.py
pelican/lib/python2.7/sre_parse.py
pelican/lib/python2.7/stat.py
pelican/lib/python2.7/types.py
pelican/lib/python2.7/UserDict.py
pelican/lib/python2.7/warnings.py

All these symlinks are dangling; we need to remove them and re-run the virtualenv command.

$HOME/.virtualenvs (0) > find pelican/ -type l -delete

$HOME/.virtualenvs (0) > virtualenv pelican
Overwriting pelican/lib/python2.7/site.py with new content
Overwriting pelican/lib/python2.7/orig-prefix.txt with new content
New python executable in pelican/bin/python2.7
Also creating executable in pelican/bin/python
Installing setuptools, pip...done.
Overwriting pelican/bin/activate_this.py with new content

Load the virtualenv again ;-)

$HOME/.virtualenvs (0) > source ~/.virtualenvs/pelican/bin/activate.sh

$HOME/.virtualenvs > (pelican) pelican --version
3.5.dev

And now, you have a working virtualenv.
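
If several virtualenvs broke at once, the same two steps can be scripted. Here's a hedged Python sketch that repairs every directory under ~/.virtualenvs, assuming everything in there is a virtualenv:

import os
import subprocess

VENV_ROOT = os.path.expanduser('~/.virtualenvs')

for name in os.listdir(VENV_ROOT):
    venv = os.path.join(VENV_ROOT, name)
    if not os.path.isdir(venv):
        continue
    # Equivalent of `find $venv -type l -delete`: drop the dangling symlinks.
    for dirpath, dirnames, filenames in os.walk(venv):
        for entry in dirnames + filenames:
            path = os.path.join(dirpath, entry)
            if os.path.islink(path):
                os.unlink(path)
    # Re-run virtualenv to recreate the links against the new Python.
    subprocess.check_call(['virtualenv', venv])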

July 28, 2014 10:00 PM


Mike C. Fletcher

GStreamer Level Plugin Monitoring

So you have an audio stream where you'd like to get a human-friendly readout of the current audio level. You add a level component, but how do you actually get the level messages it generates?

bus.add_signal_watch()
bus.connect( 'message', self.on_level )

It really seems that you *should* be able to use element.connect(), but there doesn't seem to be an event on the level element to connect to. So, you wind up having to process all of the bus messages and look for your level message...

    def on_level( self, bus, message ):
        """Level message was received"""
        if message.src == self.monitor and message.type==gst.MESSAGE_ELEMENT:
            self.send( {
                'type':'level',
                'level': message.structure['rms'][0],
            })

Next problem: the "level" is not particularly human friendly. It appears to be a decibel attenuation (negative, with 0 as the maximum), where e.g. -17 seems to be a pretty loud voice and ~-40 is background noise... but that's just what *my* setup produces. I'm still trying to figure out if there's a formal "what pocketsphinx wants for dictation" definition somewhere. The "vader" element has 0.0078125 as its default volume cut-off, but I have no idea what that means.
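
In the meantime, here's a rough helper based on the numbers observed above (the -40/-17 range is from *my* setup, so treat both bounds as tunable assumptions):

def human_level(db, floor=-40.0, ceiling=-17.0):
    """Map an RMS decibel reading onto a clamped 0.0-1.0 meter."""
    fraction = (db - floor) / (ceiling - floor)
    return max(0.0, min(1.0, fraction))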

July 28, 2014 07:33 PM


Logilab

Pylint 1.3 / Astroid 1.2 released

The EP14 Pylint sprint team (more on this here and there) is proud to announce they just released Pylint 1.3 together with its companion Astroid 1.2. As usual, this includes several new features as well as bug fixes. You'll find below a structured list of the changes.

Packages have been uploaded to PyPI; Debian/Ubuntu packages should soon be provided by Logilab, until they get into the standard packaging system of your favorite distribution.

Please notice Pylint 1.3 will be the last release branch supporting Python 2.5 and 2.6. Starting from 1.4, we will only support Python 2.7 and greater. This will be the occasion to do some great cleanup in the code base. Notice this only concerns Pylint's runtime: you will still be able to check your Python 2.5 code, as long as you run Pylint itself on at least Python 2.7.

New checks

  • Add multiple checks for PEP 3101 advanced string formatting: 'bad-format-string', 'missing-format-argument-key', 'unused-format-string-argument', 'format-combined-specification', 'missing-format-attribute' and 'invalid-format-index'
  • New 'invalid-slice-index' and 'invalid-sequence-index' for invalid sequence and slice indices
  • New 'assigning-non-slot' warning, which detects assignments to attributes not defined in slots
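
As a quick illustration of the new 'assigning-non-slot' warning (the Point class below is our own made-up example, not from the changelog):

class Point(object):
    __slots__ = ('x', 'y')

    def __init__(self):
        self.x = 0
        self.y = 0
        self.z = 0  # flagged by pylint 1.3: 'z' is not declared in __slots__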

Improved checkers

  • Fixed 'fixme' false positive (#149)
  • Fixed 'unbalanced-iterable-unpacking' false positive when encountering starred nodes (#273)
  • Fixed 'bad-format-character' false positive when encountering the 'a' format on Python 3
  • Fixed 'unused-variable' false positive when the variable is assigned through an import (#196)
  • Fixed 'unused-variable' false positive when assigning to a nonlocal (#275)
  • Fixed 'pointless-string-statement' false positive for attribute docstrings (#193)
  • Emit 'undefined-variable' when using the Python 3 metaclass= argument. Also fixed an 'unused-import' false positive for that construction (#143)
  • Emit 'broad-except' and 'bare-except' even if the number of except handlers is different than 1. Fixes issue (#113)
  • Emit 'attribute-defined-outside-init' for all statements in the same module as the offended class, not just for the last assignment (#262, as well as a long standing output mangling problem in some edge cases)
  • Emit 'not-callable' when calling properties (#268)
  • Don't let ImportError propagate from the imports checker, leading to crash in some namespace package related cases (#203)
  • Don't emit 'no-name-in-module' for ignored modules (#223)
  • Don't emit 'unnecessary-lambda' if the body of the lambda call contains call chaining (#243)
  • Definition order is considered for classes, function arguments and annotations (#257)
  • Only emit 'attribute-defined-outside-init' for definitions within the same module as the offended class, avoiding mangled output in some cases
  • Don't emit the 'hidden-method' message when the attribute has been monkey-patched; you're on your own when you do that.

Others changes

  • Checkers are now properly ordered to respect priority (#229)
  • Use the proper mode for pickle when opening and writing the stats file (#148)

Astroid changes

  • Function nodes can detect decorator call chains and see if they are decorated with builtin descriptors (classmethod and staticmethod).
  • infer_call_result called on a subtype of the builtin type will now return a new Class rather than an Instance.
  • Class.metaclass() now handles module-level __metaclass__ declaration on python 2, and no longer looks at the __metaclass__ class attribute on python 3.
  • Add slots method to Class nodes, for retrieving the list of valid slots it defines.
  • Expose function annotation to astroid: Arguments node exposes 'varargannotation', 'kwargannotation' and 'annotations' attributes, while Function node has the 'returns' attribute.
  • Backported most of the logilab.common.modutils module there, as most things there are for pylint/astroid only and we want to be able to fix them without requiring a new logilab.common release
  • Fix names grabbed using wildcard import in "absolute import mode" (i.e. with absolute_import activated from the __future__ or with python 3) (pylint issue #58)
  • Add support in brain for understanding enum classes.

July 28, 2014 03:21 PM

EP14 Pylint sprint Day 2 and 3 reports


Here is the list of things we managed to achieve during those last two days at EuroPython.

After several attempts, Michal managed to have pylint running analysis on several files in parallel. This is still in a pull request (https://bitbucket.org/logilab/pylint/pull-request/82/added-support-for-checking-files-in) because of some limitations, so we decided it won't be part of the 1.3 release.

Claudiu killed maybe 10 bugs or so and did some heavy issue cleanup in the trackers. He also demonstrated some experimental support for python 3 style annotations to drive better inference. Pretty exciting! Torsten also killed several bugs, restored python 2.5 compat (though that will need a logilab-common release as well), and introduced a new functional test framework that will replace the old one once all the existing tests have been backported. On Wednesday, he showed us a near-future feature they already have at Google: some kind of confidence level associated with messages, so that you can filter on it. Sylvain fixed a couple of bugs (including https://bitbucket.org/logilab/pylint/issue/58/ which was annoying the whole numpy community), started some refactoring of the PyLinter class so it does a little bit fewer things (still way too many though) and attempted to improve the pylint score of both pylint and astroid, which went down recently "thanks" to new checks like 'bad-continuation'.

Also, we merged the pylint-brain project into astroid to simplify things, so you should now submit your brain plugins directly to the astroid project. Hopefully you'll be redirected there if you attempt to use the old (removed) pylint-brain project on bitbucket.

And, the good news is that now both Torsten and Claudiu have new powers: they should be able to do some releases of pylint and astroid. To celebrate that and the end of the sprint, we published Pylint 1.3 together with Astroid 1.2. More on this here.

July 28, 2014 03:21 PM


Rob Galanakis

Practical Maya Programming with Python is Published

My book, Practical Maya Programming with Python, has finally been published! Please check it out and tell me what you think. I hope you will find it sufficiently-but-not-overly opinionated :) It is about as personal as a technical book can get, being distilled from years of mentoring many technical artists and programmers, which is a very intimate experience. It also grows from my belief and observation that becoming a better programmer will, due to all sorts of indirect benefits, help make you a better person.

If you are using Python as a scripting language in a larger application (a game engine, productivity software, 3D software, even a monolithic codebase that no longer feels like Python), there's a lot of relevant material here about turning those environments into more standard and traditional Python development environments, which give you better tools and velocity. The Maya knowledge required is minimal for much of the book. Wrapping a procedural undo system with context managers or decorators is universal. A short quote from the Preface:

This book is not a reference. It is not a cookbook, and it is not a comprehensive guide to Maya’s Python API. It is a book that will teach you how to write better Python code for use inside of Maya. It will unearth interesting ways of using Maya and Python to create amazing things that wouldn’t be possible otherwise. While there is plenty of code in this book that I encourage you to copy and adapt, this book is not about providing recipes. It is a book to teach skills and enable.

Finally, to those who pre-ordered, I’m truly sorry for all the delays. They’re unacceptable. I hope you’ll buy and enjoy the book anyway. At least I now have a real-world education on the perils of working with the wrong publisher, and won’t be making that same mistake again.

Thanks and happy reading!
Rob Galanakis

July 28, 2014 02:38 PM


Catherine Devlin

auto-generate SQLAlchemy models

PyOhio gave my lightning talk on ddlgenerator a warm reception, and Brandon Lorenz got me thinking, and PyOhio sprints filled me with py-drenaline, and now ddlgenerator can inspect your data and spit out SQLAlchemy model definitions for you:


$ cat merovingians.yaml
-
  name: Clovis I
  reign:
    from: 486
    to: 511
-
  name: Childebert I
  reign:
    from: 511
    to: 558
$ ddlgenerator --inserts sqlalchemy merovingians.yaml

from sqlalchemy import create_engine, Column, Integer, MetaData, Table, Unicode
engine = create_engine(r'sqlite:///:memory:')
metadata = MetaData(bind=engine)

merovingians = Table('merovingians', metadata,
    Column('name', Unicode(length=12), nullable=False),
    Column('reign_from', Integer(), nullable=False),
    Column('reign_to', Integer(), nullable=False),
    schema=None)

metadata.create_all()
conn = engine.connect()
inserter = merovingians.insert()
conn.execute(inserter, **{'name': 'Clovis I', 'reign_from': 486, 'reign_to': 511})
conn.execute(inserter, **{'name': 'Childebert I', 'reign_from': 511, 'reign_to': 558})
conn.connection.commit()

Brandon's working on a pull request to provide similar functionality for Django models!

July 28, 2014 03:30 PM


PyCon Australia

Workshops, call for Mentors

This year we're trying something new: introductory workshops designed for beginners who are not attending the conference. It's an attempt by the Python community to reach out to the wider community. All of these workshops are being held at our venue partner, The Edge, part of the State Library of Queensland at South Brisbane.

The full details of the workshops can be found at http://2014.pycon-au.org/programme/workshops

All of our workshops need mentors, please consider signing up as a mentor.

July 28, 2014 01:41 PM


Martijn Faassen

My visit to EuroPython 2014

I had a fun time at EuroPython 2014 in Berlin last week. It was a very well organized conference and I enjoyed meeting old friends again as well as meeting new people. Before I went I was a bit worried that, with the number of attendees, it'd feel too massive; I had that experience at a PyCon in the US a few years ago. But I was pleasantly surprised it didn't -- it felt like a smaller conference, and I liked it.

Another positive thing that stood out was the greater diversity; there seemed to be more people from central and eastern Europe than before, and most of all, there were more women. It was underscored by a 13 year old girl giving a lightning talk -- that was just not going to happen at EuroPython 5 years ago.

This is a very positive trend and I hope it continues. I know it takes a lot of work on the part of the organizers to get this far.

I gave a talk at EuroPython myself this year, and I think it went well.

July 28, 2014 12:00 PM


Stefan Behnel

Running Coverity Scan on lxml

Hearing a talk about static analysis at EuroPython 2014 and meeting Christian Heimes there (CPython core dev and member of the security response team) got us talking about running Coverity Scan on Cython generated code. They provide a free service for Open Source projects, most likely because there is a clear benefit in terms of marketing visibility and distributed filtering work on a large amount of code.

The problem with a source code generator is that you can only run the analyser on the generated code, so you need a real world project that uses the generator. The obvious choice for us was lxml, as it has a rather large code base with more than 230000 lines of C code, generated from some 20000 lines of Cython code. The first run against the latest lxml release got us about 1200 findings, but a quick glance through them showed that the bulk of them were false positives for the way Cython generates code for some Python constructs. There was also a large set of "dead code" findings that I had already worked on in Cython a couple of months ago. It now generates substantially less dead code. So I gave it another run against the current developer versions of both lxml and Cython.

The net result is that the number of findings went down to 428. A large subset of those relates to constant macros in conditions, which is what I use in lxml to avoid a need for C level #ifdefs. The C compiler is happy to discard this code, so Coverity's dead code finding is ok but not relevant. Other large sets of "dead code" findings are due to Cython generating generic error handling code in cases where an underlying C macro actually cannot fail, e.g. when converting a C boolean value to Python's constant True/False objects. So that's ok, too.

There seems to be a bug in the current Coverity analysis that gets confused by repeated C boolean checks, e.g.

((x == NULL) != 0)

Cython generates this additional check at the end to make sure that the integer result of a C boolean test is always 1 or 0 and nothing else. Coverity Scan currently seems to read it as

x == NULL && NULL != 0

and assumes that the result is always false because NULL != 0 is a constant false value.

It's also a bit funny that the tool complains about a single "{" being dead code, although it's followed immediately by a (used) label. That's not really an amount of code that I'd consider relevant for reporting.

On the upside, the tool found another couple of cases in the try-except implementation where Cython was generating dead code, so I was able to eliminate them. The advantage here is that a goto statement can be eliminated, which may leave the target label unused and can thus eliminate further code under that label that would be generated later but now can also be suppressed. Well, and generating less code is generally a good thing anyway.

Overall, the results make a really convincing case for Cython. Nothing of importance was found, and the few minor issues where Cython still generated more code than necessary could easily be eliminated, so that all projects that use the next version can just benefit. Compare that to manually written C extension code, where reference counting is a large source of errors and the verbose C-API of CPython makes the code substantially harder to get right and to maintain than the straightforward Python syntax and semantics of Cython. When run against the CPython code base for the first time, Coverity Scan found several actual bugs and even security issues. This also nicely matches the findings by David Malcolm and his GCC based analysis tool, who ended up using Cython generated code for eliminating false positives, rather than finding actual bugs in it.

July 28, 2014 09:04 AM

July 27, 2014


Wesley Chun

Introduction to Python decorators

In this post, we're going to give you a user-friendly introduction to Python decorators. (The code works on both Python 2 [2.6 or 2.7 only] and 3 so don't be concerned with your version.) Before jumping into the topic du jour, consider the usefulness of the map() function. You've got a list with some data and want to apply some function [like times2() below] to all its elements and get a new list with the modified data:

def times2(x):
    return x * 2

>>> list(map(times2, [0, 1, 2, 3, 4]))
[0, 2, 4, 6, 8]

Yeah yeah, I know that you can do the same thing with a list comprehension or generator expression, but my point was about an independent piece of logic [like times2()] and mapping that function across a data set ([0, 1, 2, 3, 4]) to generate a new data set ([0, 2, 4, 6, 8]). However, since mapping functions like times2() aren't tied to any particular chunk of data, you can reuse them elsewhere with other unrelated (or related) data.

Along similar lines, consider function calls. You have independent functions and methods in classes. Now, think about "mapped" execution across functions. What are things that you can do with functions that don't have much to do with the behavior of the functions themselves? How about logging function calls, timing them, or some other introspective, cross-cutting behavior? Sure, you can implement that behavior in each of the functions you care about, but since such code is so generic, it would be nice to write it just once.

Introduced in 2.4, decorators modularize cross-cutting behavior so that developers don't have to implement near duplicates of the same piece of code for each function. Rather, Python gives them the ability to put that logic in one place and use decorators with its at-sign ("@") syntax to "map" that behavior to any function (or method). This compartmentalization of cross-cutting functionality gives Python an aspect-oriented programming flavor.

How do you do this in Python? Let's take a look at a simple example, the logging of function calls. Create a decorator function that takes a function object as its sole argument, and implement the cross-cutting functionality. In logged() below, we're just going to log function calls by making a call to the print() function each time a logged function is called.

def logged(_func):
    def _wrapped():
        print('Function %r called at: %s' % (
            _func.__name__, ctime()))
        return _func()
    return _wrapped

In logged(), we use the function's name (given by _func.__name__) plus a timestamp from time.ctime() to build our output string. Make sure you get the right imports, time.ctime() for sure, and if using Python 2, the print() function:

from __future__ import print_function # 2.6 or 2.7 only
from time import ctime

Now that we have our logged() decorator, how do we use it? On the line above the function which you want to apply the decorator to, place an at-sign in front of the decorator name. That's followed immediately on the next line with the normal function declaration. Here's what it looks like, applied to a boring generic foo() function which just print()s that it's been called.

@logged
def foo():
    print('foo() called')

When you call foo(), you can see that the wrapper installed by logged() runs first, and then calls the original foo() on your behalf:

$ log_func.py
Function 'foo' called at: Sun Jul 27 04:09:37 2014
foo() called

If you take a closer look at logged() above, the way the decorator works is that the decorated function is "wrapped": it is passed as _func to the decorator, and then the newly-wrapped function _wrapped() is (re)assigned to the name foo. That's why it now behaves the way it does when you call it.

The entire script:

#!/usr/bin/env python
'log_func.py -- demo of decorators'

from __future__ import print_function
 # 2.6 or 2.7 only
from time import ctime

def logged(_func):
    def _wrapped():
        print('Function %r called at: %s' % (
              _func.__name__, ctime()))
        return _func()
    return _wrapped

@logged
def foo():
    print('foo() called')

foo()


That was just a simple example to give you an idea of what decorators are. If you dig a little deeper, you'll discover one caveat: the wrapping isn't perfect. For example, the attributes of foo() are lost, i.e., its name and docstring. If you ask for either, you'll get _wrapped()'s info instead:

>>> print("My name:", foo.__name__) # should be 'foo'!
My name: _wrapped
>>> print("Docstring:", foo.__doc__) # _wrapped's docstring!
Docstring: None

In reality, the "@" syntax is just a shortcut. Here's what you really did, which should explain this behavior:

def foo():
    print('foo() called')

foo = logged(foo) # returns _wrapped (and its attributes)

So as you can tell, it's not a complete wrap. A convenience function that ties up these loose ends is functools.wraps(). If you use it and run the same code, you will get foo()'s info. However, if you're not going to use a function's attributes while it's wrapped, it's less important to do this.
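
Here's what that fix looks like, as a minimal sketch of the same decorator; only the import and the @wraps line are new:

from functools import wraps
from time import ctime

def logged(_func):
    @wraps(_func)  # copies _func's __name__, __doc__, etc. onto _wrapped
    def _wrapped():
        print('Function %r called at: %s' % (
              _func.__name__, ctime()))
        return _func()
    return _wrapped

@logged
def foo():
    'foo does nothing interesting'
    print('foo() called')

print(foo.__name__)  # now 'foo' instead of '_wrapped'
print(foo.__doc__)   # now foo's own docstring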

There's also support for additional features, such as calling decorated functions with parameters, applying more complex decorators, applying multiple levels of decorators, and also class decorators. You can find out more about (function and method) decorators in Chapter 11 of Core Python Programming or live in my upcoming course which starts in just a few days near the San Francisco airport... there are still a few seats left!

July 27, 2014 05:09 PM


EuroPython Society

Hi, I have a small suggestion for the Bylaws ... on all other pages of the website you abbreviate "EuroPython Society" with "EPS" but in the Bylaws you suddenly change to "EP" (in clauses 1, 2, 14, 17). You may wish to settle for "EPS" everywhere. Hope this helps. BTW, many thanks for all your hard work over the years, it really is very much appreciated.

Thank you for your suggestion. At our General Assembly at EuroPython 2014 we just voted to update the bylaws to use the “EPS” abbreviation throughout.

We will update the website in the coming days.

Enjoy,

EuroPython Society

July 27, 2014 11:30 AM

July 26, 2014


Tarek Ziade

ToxMail experiment

I am still looking for a good e-mail replacement that is more respectful of my privacy.

This will never happen with the existing e-mail system due to the way it works: when you send an e-mail to someone, even if you encrypt the body of your e-mail, the metadata will transit from server to server in clear, and the final destination will store it.

Every PGP UX I have tried is terrible anyways. It's just too painful to get things right for someone that has no knowledge (and no desire to have some) of how things work.

What I am aiming for now is a separate system to send and receive mails with my close friends and my family. Something that my mother can use like regular e-mails, without any extra work.

I guess some kind of "Darknet for E-mails" where there are no intermediate servers between my mailbox and my mom's mailbox, and no way for an eavesdropper to get the content.

Ideally:

Project Tox

The Tox Project is a project that aims to replace Skype with a more secure instant messaging system. You can send text, voice and even video messages to your friends.

It's based on NaCl for the crypto bits and in particular the crypto_box API which provides high-level functions to generate public/private key pairs and encrypt/decrypt messages with them.

The other main feature of Tox is its Distributed Hash Table that contains the list of nodes that are connected to the network with their Tox Id.

When you run a Tox-based application, you become part of the Tox network by registering to a few known public nodes.

To send a message to someone, you have to know their Tox Id and send an encrypted message using the crypto_box API and the keypair magic.

Tox was created as an instant messaging system, so it has features to add/remove/invite friends, create groups etc., but its core capability is to let you reach another node given its id, and communicate with it. And that can be any kind of communication.

So e-mails could transit through Tox nodes.

Toxmail experiment

Toxmail is my little experiment to build a secure e-mail system on top of Tox.

It's a daemon that registers to the Tox network and runs an SMTP service that converts outgoing e-mails to text messages that are sent through Tox. It also converts incoming text messages back into e-mails and stores them in a local Maildir.

Toxmail also runs a simple POP3 server, so it's actually a full stack that can be used through an e-mail client like Thunderbird.

You can just create a new account in Thunderbird, point it to the Toxmail SMTP and POP3 local services, and use it like any other e-mail account.

When you want to send someone an e-mail, you have to know their Tox Id, and use TOXID@tox as the recipient.

For example:

7F9C31FE850E97CEFD4C4591DF93FC757C7C12549DDD55F8EEAECC34FE76C029@tox

When the SMTP daemon sees this, it tries to send the e-mail to that Tox Id. What I am planning to do is to have an automatic conversion of regular e-mail addresses using a lookup table the user can maintain: a list of contacts where each entry provides an e-mail address and a Tox Id.
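
A minimal sketch of what that lookup could look like (the CONTACTS table and the resolve() helper are hypothetical, not part of the current prototype):

# Hypothetical contacts table mapping regular e-mail addresses to Tox Ids.
CONTACTS = {
    'mom@example.com':
        '7F9C31FE850E97CEFD4C4591DF93FC757C7C12549DDD55F8EEAECC34FE76C029',
}

def resolve(recipient):
    """Return a TOXID@tox address for known contacts, else None."""
    if recipient.lower().endswith('@tox'):
        return recipient  # already a Tox address
    tox_id = CONTACTS.get(recipient.lower())
    if tox_id is not None:
        return tox_id + '@tox'
    return None  # unknown recipient: fall back to regular e-mail routing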

End-to-end encryption, no intermediates between the user and the recipient. Ya!

Caveats & Limitations

For ToxMail to work, it needs to be connected to the Tox network all the time.

This limitation can be partially solved by adding a retry feature to the SMTP daemon: if the recipient's node is offline, the mail is stored and sent later.

But for the e-mail to go through, the two nodes have to be online at the same time at some point.

Maybe a good way to solve this would be to have Toxmail run on a Raspberry Pi plugged into the home internet box. That'd make sense actually: run your own little mail server for all your family/friends conversations.

One major problem though is what to do with e-mails that are to be sent to recipients that are part of your toxmail contact list, but also to recipients that are not using Toxmail. I guess the best thing to do is to fallback to the regular routing in that case, and let the user know.

Anyways, lots of fun playing with this on my spare time.

The prototype is being built here, using Python and the PyTox binding:

https://github.com/tarekziade/toxmail

It has reached a state where you can actually send and receive e-mails :)

I'd love to have feedback on this little project.

July 26, 2014 11:22 AM


End Point

Python Imports

For a Python project I'm working on, I wrote a parent class with multiple child classes, each of which made use of various modules that were imported in the parent class. A quick solution to making these modules available in the child classes would be to use wildcard imports in the child classes:

from package.parent import *

however, PEP8 warns against this stating "they make it unclear which names are present in the namespace, confusing both readers and many automated tools."

For example, suppose we have three files:

# a.py
import module1

class A(object):
    def __init__(self):
        pass

# b.py
from a import *
import module2

class B(A):
    def __init__(self):
        super(B, self).__init__()

# c.py
from b import *

class C(B):
    def __init__(self):
        super(C, self).__init__()

To someone reading just b.py or c.py, it is unknown that module1 is present in the namespace of B and that both module1 and module2 are present in the namespace of C. So, following PEP8, I just explicitly imported any module needed in each child class. Because in my case there were many imports and because it seemed repetitive to have all those imports duplicated in each of the many child classes, I wanted to find out if there was a better solution. While I still don't know if there is, I did go down the road of how imports work in Python, at least for 3.4.1, and will share my notes with you.

Python allows you to import modules using the import statement, the built-in function __import__(), and the function importlib.import_module(). The differences between these are:

The import statement first "searches for the named module, then it binds the results of that search to a name in the local scope" (Python Documentation). Example:

Python 3.4.1 (default, Jul 15 2014, 13:05:56) 
[GCC 4.8.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import re
>>> re
<module 're' from '/home/miguel/.pythonbrew/pythons/Python-3.4.1/lib/python3.4/re.py'>
>>> re.sub('s', '', 'bananas')
'banana'

Here the import statement searches for a module named re then binds the result to the variable named re. You can then call re module functions with re.function_name().

A call to function __import__() performs the module search but not the binding; that is left to you. Example:

>>> muh_regex = __import__('re')
>>> muh_regex
<module 're' from '/home/miguel/.pythonbrew/pythons/Python-3.4.1/lib/python3.4/re.py'>
>>> muh_regex.sub('s', '', 'bananas')
'banana'

Your third option is to use importlib.import_module() which, like __import__(), only performs the search:

>>> import importlib
>>> muh_regex = importlib.import_module('re')
>>> muh_regex
<module 're' from '/home/miguel/.pythonbrew/pythons/Python-3.4.1/lib/python3.4/re.py'>
>>> muh_regex.sub('s', '', 'bananas')
'banana'

Let's now talk about how Python searches for modules. The first place it looks is in sys.modules, which is a dictionary that caches previously imported modules:

>>> import sys
>>> 're' in sys.modules
False
>>> import re
>>> 're' in sys.modules
True
>>> sys.modules['re']
<module 're' from '/home/miguel/.pythonbrew/pythons/Python-3.4.1/lib/python3.4/re.py'>

If the module is not found in sys.modules Python searches sys.meta_path, which is a list that contains finder objects. Finders, along with loaders, are objects in Python's import protocol. The job of a finder is to return a module spec, using method find_spec(), containing the module's import-related information which the loader then uses to load the actual module. Let's see what I have in my sys.meta_path:

>>> sys.meta_path
[<class '_frozen_importlib.BuiltinImporter'>, <class '_frozen_importlib.FrozenImporter'>, <class '_frozen_importlib.PathFinder'>]

Python will use each finder object in sys.meta_path until the module is found and will raise an ImportError if it is not found. Let's call find_spec() with parameter 're' on each of these finder objects:

>>> sys.meta_path[0].find_spec('re')
>>> sys.meta_path[1].find_spec('re')
>>> sys.meta_path[2].find_spec('re')
ModuleSpec(name='re', loader=<_frozen_importlib.SourceFileLoader object at 0x7ff7eb314438>, origin='/home/miguel/.pythonbrew/pythons/Python-3.4.1/lib/python3.4/re.py')

The first finder knows how to find built-in modules and since re is not a built-in module, it returns None.

>>> 're' in sys.builtin_module_names
False

The second finder knows how to find frozen modules, which re is not. The third knows how to find modules from a list of path entries called an import path. For re the import path is sys.path but for subpackages the import path can be the parent's __path__ attribute.

>>> sys.path
['', '/home/miguel/.pythonbrew/pythons/Python-3.4.1/lib/python3.4/site-packages/distribute-0.6.49-py3.4.egg', '/home/miguel/.pythonbrew/pythons/Python-3.4.1/lib', '/home/miguel/.pythonbrew/pythons/Python-3.4.1/lib/python34.zip', '/home/miguel/.pythonbrew/pythons/Python-3.4.1/lib/python3.4', '/home/miguel/.pythonbrew/pythons/Python-3.4.1/lib/python3.4/plat-linux', '/home/miguel/.pythonbrew/pythons/Python-3.4.1/lib/python3.4/lib-dynload', '/home/miguel/.pythonbrew/pythons/Python-3.4.1/lib/python3.4/site-packages', '/home/miguel/.pythonbrew/pythons/Python-3.4.1/lib/python3.4/site-packages/setuptools-0.6c11-py3.4.egg-info']

Once the module spec is found, the loading machinery takes over. That's as far as I dug but you can read more about the loading process by reading the documentation.
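
To make the finder protocol concrete, here is a toy example of my own (not from the documentation): a meta path finder that logs every search and then defers, since returning None from find_spec() tells Python to try the next finder.

import sys

class NoisyFinder:
    """Logs every module search, then defers to the remaining finders."""
    def find_spec(self, fullname, path=None, target=None):
        print('asked to find {!r}'.format(fullname))
        return None  # None means: keep trying the other finders

sys.meta_path.insert(0, NoisyFinder())
import json  # prints "asked to find 'json'" before the normal import runs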

July 26, 2014 12:33 AM

July 25, 2014


Geert Vanderkelen

MySQL Connector/Python v2.0.0 alpha

A new major version of Connector/Python is available: v2.0.0 alpha has been released and is available for download! As with any alpha software, it’s probably not good to throw it into production just yet.

Our manual has the full change log, but here’s an overview of the most important changes for this release.

Some incompatibilities

The world evolves, at least the software does, and Python is no different. I’m not as bold as the guys at Django who dropped support for Python v2.6 with the Django v1.7 release. I’m leaving it in because I’m nice.

Supported Python versions: 2.6, 2.7, 3.3 and 3.4

We no longer support Python 3.1 and 3.2. One of the reasons is that we consolidated the code bases for both major Python versions, and the unicode syntax brought back in 3.3 was a blessing (for example, u’パイソン’).

Raw Cursors Return bytearray Objects

Since we consolidated the code bases for Python 2 and 3, we needed to make the behaviour as similar as possible between the two. It’s not easy with Unicode strings, but with bytes we have the bytearray type. Raw cursors now return bytearray objects instead of strings in Python 2 and bytes in Python 3.

If you want the previous behaviour back, you can inherit from MySQLCursorRaw and change some methods. Please leave comments if you’d like an example of this.

LOAD DATA LOCAL INFILE On by Default

In Connector/Python v1.x you needed to set the client flags to enable LOAD DATA LOCAL INFILE on the client side. Here’s an example:

# Connector/Python v1.2
import mysql.connector
from mysql.connector import ClientFlag
cnx = mysql.connector.connect(.. , client_flags=[ClientFlag.LOCAL_FILES])

Now in Connector/Python v2.0 it is on by default. However, some people might not like that, so there is a switch to disable it:

# Connector/Python v2.0
import mysql.connector
cnx = mysql.connector.connect(.. , allow_local_infile=False)

Note that you still need to make sure that the MySQL Server is configured to allow this statement.

New Cursors: dict and namedtuple

At last, we have cursors which return rows as dictionaries or named tuples. PEP-249 does not define these, since not all database systems return column names in a consistent (case sensitive or insensitive) way.

But this is MySQL.

Here is an example of how to use a cursor returning dictionaries:

query = (
    "SELECT TABLE_NAME, TABLE_ROWS "
    "FROM INFORMATION_SCHEMA.TABLES "
    " WHERE TABLE_SCHEMA='mysql' ORDER BY TABLE_NAME"
)
cur = cnx.cursor(dictionary=True)
cur.execute(query)
for row in cur:
    print("{TABLE_NAME:>30s} {TABLE_ROWS}".format(**row))

That’s far less code for something simple. Each row would look this:

    {u'TABLE_NAME': u'user', u'TABLE_ROWS': 11}

If you like named tuples better, you can do the same, simply giving the named_tuple argument.

cur = cnx.cursor(named_tuple=True)
cur.execute(query)
for row in cur:
    if row.TABLE_ROWS > 0:
        print("{name:>30s} {rows}".format(
            name=row.TABLE_NAME,
            rows=row.TABLE_ROWS))

You can also combine it with the raw=True argument to have raw cursors.

Option Files Support Added

Option files can now be read so you don’t have to repeat all these connection arguments everywhere in your source code. There are lots of ways to do this, but we wanted to read and support the MySQL option files already used by the client tools and the server.

import mysql.connector
cnx = mysql.connector.connect(option_files='/etc/mysql/connectors.cnf')

By default no files are read; you have to explicitly specify which files to read and in which order. The option groups that are read are client and connector_python. You can also override this and specify particular group(s) using the option_groups argument.
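
For illustration, a hypothetical /etc/mysql/connectors.cnf could look like this (the group names are the defaults mentioned above; the user, password and raise_on_warnings values are made-up samples):

[client]
user = scott
password = tiger
host = 127.0.0.1
database = world

[connector_python]
raise_on_warnings = True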

And more..

Useful links

July 25, 2014 06:56 PM


Europython

EuroPython 2014 Feedback

Now that EuroPython 2014 is almost over, we would like to say a

BIG THANK YOU

to the local organizers in Berlin! You did a wonderful job with the conference organization.

Please provide feedback

Going forward, we would like to ask all EuroPython attendees to send us your feedback for EuroPython 2014, so we can use this information to plan for EuroPython 2015.

Please use our

EuroPython 2014 Feedback Form

for sending us your feedback.

Helping with EuroPython 2015

If you would like to help with EuroPython 2015, we invite you to join the EuroPython Society. Membership is free. Just go to our application page and enter your details.

In the coming months, we will start the discussions about the new work group model we’ve announced here at the conference.

Thanks to all EuroPython attendees

Thank you very much for attending and have a safe trip home.

We’re all looking forward to seeing you again for EuroPython 2015.

Enjoy,

EuroPython Society

July 25, 2014 03:50 PM




Rob Galanakis

goless 0.7 released, with Python3 support and bug fixes

goless version 0.7.0 is out on PyPI. goless facilitates writing Go language style concurrent programs in Python, including functionality for channels, select, and goroutines.

I forgot to blog about 0.6 at the start of July, which brought Python 3.3 and 3.4 support to goless (#17). I will support pypy3 as soon as Travis supports it.

Version 0.7 includes:
- A “fix” for a gevent problem on Windows (socket must be imported!). #28
- Errors in the case of a deadlock will be more informative. For example, if the last greenlet/tasklet tries to do a blocking send or recv, a DeadlockError will be raised, instead of the underlying error being raised. #25
- goless now has a small exception hierarchy instead of exposing the underlying errors.
- Better PyPy stackless support. #29
- goless.select can be called with the cases as individual arguments, select(case1, case2, case3), in addition to a list of cases, i.e., select([case1, case2, case3]). #22
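
For readers new to the library, a minimal sketch of the basics (an unbuffered channel, a goroutine started with goless.go, and a blocking send/recv pair); only the core documented calls are used:

import goless

chan = goless.chan()  # unbuffered: send blocks until someone receives

def worker():
    chan.send('hello from a goless goroutine')

goless.go(worker)
print(chan.recv())  # prints: hello from a goless goroutine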

Thanks to Michael Az for several contributions to this release.

Happy concurrent programming!

July 25, 2014 02:10 PM


PyCon Australia

Final weekend for Dinner tickets

We're passing along numbers to our venue for catering on Monday, so this is your last weekend to grab tickets for our conference dinner! As well as fine food and company, we have an important speaker lined up: Paul Gampe.

Get along to the registration page and be sure to select dinner as one of the options.

We'd also like to know if you're coming along to the sprints so we can cater for them appropriately, please head along to our handy-dandy form and let us know!

July 25, 2014 12:14 PM


AppNeta Blog

Faking the Funk: Mocking External Services in Python Tests

In this day and age, it’s difficult to build an application that does not rely on some type of external service. Whether the service is handling user identity, analyzing interesting data, or hurling RESTful insults, you have to accept the fact that you now have a dependency on something you do not control. One place […]

July 25, 2014 08:00 AM