Automatic dependency management for Python scripts with autopep723

Have you ever wished you could run a Python script without worrying about installing its dependencies first?

We've all been there. You need to hack together a small piece of code, maybe your LLM friend gives you a useful Python script you want to try, or you find something on Stack Overflow (God rest its soul) or in a gist. But if you try to run it directly:

ModuleNotFoundError: No module named 'your_experiment_dependency'

Then begins the bureaucracy: figuring out which packages you need (with the aggravating factor that the name a package is installed under often doesn't match the name you import), dealing with version conflicts, and cluttering your environment.

If everyone agrees that Python is a pragmatic and elegant language that's ideal for scripts, why does this part have to be complicated?
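
For context, the tool's name refers to PEP 723, the standard for inline script metadata: a comment block at the top of a script that declares its dependencies, which runners like uv can read to set up an environment on the fly. A minimal hand-written example (my own illustration; presumably this is the kind of header autopep723 generates for you from the script's imports):

# /// script
# dependencies = [
#     "requests",
# ]
# ///
import requests

print(requests.get("https://example.com").status_code)

With that header in place, "uv run script.py" installs requests into a throwaway environment before executing the script.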

Read more…

Animated SVGs for your CLI program demos

Want to show off how cool your command-line program is with a demo of real-world usage? Want to keep your demo from becoming outdated? Want to include the demo on sites that don't allow videos or JavaScript, like GitHub or PyPI README files?

Animated SVGs generated programmatically are the solution! They are embedded like a normal image but are lighter than GIFs, play automatically on the same page, don't require JavaScript, and keep the text sharp regardless of zoom.

Plus, automating your demos gives you total control and lets you programmatically regenerate the visual result whenever the CLI changes or you want to tweak something.

As an example, here's a demo of a little program I made recently called cuitonline, which looks like this in its 0.1 version:

[animated SVG demo of cuitonline 0.1]

How do we do it?

We're going to use the following tools, all open source.

  • tuterm: A Bash tool for creating demos of CLI programs. You define a "script" (in the best sense of the word) that controls which commands are executed (for real!) in the console, which comments are shown, and which other auxiliary commands to run.

  • uvx: The equivalent of pipx for uv. We use it to run asciinema, installing it and its dependencies in a virtual environment automatically.

  • asciinema: Records terminal sessions and saves them in a text format. By default, these files are uploaded and shared through asciinema.org, which is great, but the player requires JavaScript, which limits its use. That's why we convert it to SVG.

  • svg-term-cli: Converts asciinema recordings into animated SVGs.
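
Getting the tools above set up is quick. As a rough sketch (the exact commands are my assumption, so check each project's README):

# svg-term-cli is distributed through npm
npm install -g svg-term-cli

# asciinema needs no separate install: uvx fetches it on demand (see step 2)

# tuterm is installed from its repository; follow its README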

Step by step

1. Write the script

Following the cuitonline example shown above, I named the tuterm script usage.tutorial, with the following content:

# file: > cuitonline_usage
prompt() {
    # custom prompt shown in the recording
    echo -ne "\033[1;35m\$ \033[0;33m"
}
configure() {
    # timing and color settings for the demo
    DELAY=0.02
    DELAY_PROMPT=0.2
    COLOR_MESSAGE='1;32'
}

run() {
    M "Search by CUIT"
    c cuitonline 20-22293909-8
    sleep 3
    M "Search by DNI"
    c cuitonline 10433615
    sleep 3
    M "Search by name"
    c cuitonline "messi lionel andres"
    sleep 3
    clear
    M "Get page 2 of the results"
    c cuitonline "juan jose gonzalez" -p2
    sleep 3
}

Lines that start with M are the comments, lines that start with c are commands that are shown being typed and then executed, and the rest are commands that are executed without being shown.

2. Record the terminal session

Use asciinema to record the session while tuterm runs your demo:

uvx asciinema rec --overwrite -c 'tuterm usage.tutorial --mode demo' usage.cast

3. Convert the recording into an animated SVG

With svg-term-cli we define the appearance of the "console" and convert the recording into an animated SVG:

svg-term --window --width 75 --height 24 --padding 1 --in usage.cast --out usage.svg
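
Since steps 2 and 3 are just two commands, it helps to chain them in a tiny script so the demo can be regenerated whenever the CLI changes (the file name regen_demo.sh is my own choice, not from the original post):

#!/usr/bin/env bash
# regen_demo.sh: re-record the tuterm demo and rebuild the animated SVG
set -euo pipefail
uvx asciinema rec --overwrite -c 'tuterm usage.tutorial --mode demo' usage.cast
svg-term --window --width 75 --height 24 --padding 1 --in usage.cast --out usage.svg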

4. Add the generated SVG to your README.md file

For example, if you upload it to /demo/usage.svg in your repo:

<p align="center">
<img width="90%" src="https://raw.githubusercontent.com/<user>/<repo>/refs/heads/main/demo/usage.svg" />
</p>

And that's it! Now you have an animated, sharp, and programmatically maintainable demo for your CLI program directly in your GitHub README. 😎

We are nerds, we are cheesy

We are nerds, we are cheesy :)

by @tin_nqn_

In [1]:
%matplotlib inline

import matplotlib
matplotlib.rcParams['figure.figsize'] = (16,13)
In [2]:
# Source: http://stackoverflow.com/a/4687582
# Thanks!

from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np

def plot_implicit(fn, bbox=(-1.5,1.5)):
    ''' 
    create a plot of an implicit function
    fn  ...implicit function (plot where fn==0)
    bbox ..the x,y,and z limits of plotted interval
    '''
    xmin, xmax, ymin, ymax, zmin, zmax = bbox*3
    fig = plt.figure()
    ax = fig.add_subplot(111, projection='3d')
    A = np.linspace(xmin, xmax, 100) # resolution of the contour
    B = np.linspace(xmin, xmax, 15) # number of slices
    A1,A2 = np.meshgrid(A,A) # grid on which the contour is plotted

    for z in B: # plot contours in the XY plane
        X,Y = A1,A2
        Z = fn(X,Y,z)
        cset = ax.contour(X, Y, Z+z, [z], zdir='z')
        # [z] defines the only level to plot for this contour for this value of z

    for y in B: # plot contours in the XZ plane
        X,Z = A1,A2
        Y = fn(X,y,Z)
        cset = ax.contour(X, Y+y, Z, [y], zdir='y')

    for x in B: # plot contours in the YZ plane
        Y,Z = A1,A2
        X = fn(x,Y,Z)
        cset = ax.contour(X+x, Y, Z, [x], zdir='x')
    # must set plot limits because the contour will likely extend
    # way beyond the displayed level.  Otherwise matplotlib extends the plot limits
    # to encompass all values in the contour.
    ax.set_zlim3d(zmin,zmax)
    ax.set_xlim3d(xmin,xmax)
    ax.set_ylim3d(ymin,ymax)

    plt.show()
In [3]:
def happy_saint_valentine_ipython(x,y,z):
    return -x**2*z**3 - 9*y**2*z**3/80 + (4*x**2 + 9*y**2 + 4*z**2 - 4)**3/64

plot_implicit(happy_saint_valentine_ipython)
[Output: 3D contour plot of the heart-shaped implicit surface]

The reStructuredText processor

I've always liked to write. As far as I can remember, I've always done it with a keyboard, on the computer. Except for some doodles when I'm thinking hard about something, or when I'm bored in a meeting or class, I never write on paper.

On the computer I write a lot, but I don't use word processors, not even the open-source ones: although "what you see" is "what you get", very frequently what I see isn't what I want.

Read more…