We label issues that would be good for a first time contributor as
good first issue.
These usually do not require significant experience with Rust or the Ruff code base.
We label issues that we think are a good opportunity for subsequent contributions as
help wanted.
These require varying levels of experience with Rust and Ruff. Often, we want to accomplish these
tasks but do not have the resources to do so ourselves.
You don't need our permission to start on an issue we have labeled as appropriate for community
contribution as described above. However, it's a good idea to indicate that you are going to work on
an issue to avoid concurrent attempts to solve the same problem.
Please check in with us before starting work on an issue that has not been labeled as appropriate
for community contribution. We're happy to receive contributions for other issues, but it's
important to make sure we have consensus on the solution to the problem first.
Outside of issues with the labels above, issues labeled as
bug are the
best candidates for contribution. In contrast, issues labeled with needs-decision or
needs-design are not good candidates for contribution. Please do not open pull requests for
issues with these labels.
Please do not open pull requests for new features without prior discussion. While we appreciate
exploration of new features, we will often close these pull requests immediately. Adding a
new feature to Ruff creates a long-term maintenance burden and requires strong consensus from the Ruff
team before it is appropriate to begin work on an implementation.
After cloning the repository, run Ruff locally from the repository root with:
```shell
cargo run -p ruff -- check /path/to/file.py --no-cache
```
Prior to opening a pull request, ensure that your code has been auto-formatted,
and that it passes both the lint and test validation checks:
```shell
cargo clippy --workspace --all-targets --all-features -- -D warnings  # Rust linting
RUFF_UPDATE_SCHEMA=1 cargo test  # Rust testing and updating ruff.schema.json
uvx pre-commit run --all-files --show-diff-on-failure  # Rust and Python formatting, Markdown and Python linting, etc.
```
These checks will run on GitHub Actions when you open your pull request, but running them locally
will save you time and expedite the merge process.
If you're using VS Code, you can also install the recommended rust-analyzer extension to get these checks while editing.
Note that many code changes also require updating the snapshot tests, which is done interactively
after running cargo test like so:
```shell
cargo insta review
```
If your pull request relates to a specific lint rule, include the category and rule code in the
title, as in the following examples:
[flake8-bugbear] Avoid false positive for usage after continue (B031)
[flake8-simplify] Detect implicit else cases in needless-bool (SIM103)
Ruff is structured as a monorepo with a flat crate layout: all crates are contained in the
top-level crates directory.
The vast majority of the code, including all lint rules, lives in the ruff_linter crate (located
at crates/ruff_linter). As a contributor, that's the crate that'll be most relevant to you.
At the time of writing, the repository includes the following crates:
crates/ruff_linter: library crate containing all lint rules and the core logic for running them.
If you're working on a rule, this is the crate for you.
crates/ruff_benchmark: binary crate for running micro-benchmarks.
crates/ruff_cache: library crate for caching lint results.
crates/ruff_dev: binary crate containing utilities used in the development of Ruff itself (e.g.,
cargo dev generate-all); see the cargo dev section below.
crates/ruff_diagnostics: library crate for the rule-independent abstractions in the lint
diagnostics APIs.
crates/ruff_formatter: library crate for language agnostic code formatting logic based on an
intermediate representation. The backend for ruff_python_formatter.
crates/ruff_index: library crate inspired by rustc_index.
crates/ruff_macros: proc macro crate containing macros used by Ruff.
crates/ruff_notebook: library crate for parsing and manipulating Jupyter notebooks.
crates/ruff_python_ast: library crate containing Python-specific AST types and utilities.
crates/ruff_python_codegen: library crate containing utilities for generating Python source code.
crates/ruff_python_formatter: library crate implementing the Python formatter. Emits an
intermediate representation for each node, which ruff_formatter prints based on the configured
line length.
crates/ruff_python_semantic: library crate containing Python-specific semantic analysis logic,
including Ruff's semantic model. Used to resolve queries like "What import does this variable
refer to?"
crates/ruff_python_stdlib: library crate containing Python-specific standard library data, e.g.
the names of all built-in exceptions and which standard library types are immutable.
At a high level, the steps involved in adding a new lint rule are as follows:
Determine a name for the new rule as per our rule naming convention
(e.g., AssertFalse, as in, "allow assert False").
Create a file for your rule (e.g., crates/ruff_linter/src/rules/flake8_bugbear/rules/assert_false.rs).
In that file, define a violation struct (e.g., pub struct AssertFalse). You can grep for
#[derive(ViolationMetadata)] to see examples.
In that file, define a function that adds the violation to the diagnostic list as appropriate
(e.g., pub(crate) fn assert_false) based on whatever inputs are required for the rule (e.g.,
an ast::StmtAssert node).
Define the logic for invoking the diagnostic in crates/ruff_linter/src/checkers/ast/analyze (for
AST-based rules), crates/ruff_linter/src/checkers/tokens.rs (for token-based rules),
crates/ruff_linter/src/checkers/physical_lines.rs (for text-based rules),
crates/ruff_linter/src/checkers/filesystem.rs (for filesystem-based rules), etc. For AST-based rules,
you'll likely want to modify analyze/statement.rs (if your rule is based on analyzing
statements, like imports) or analyze/expression.rs (if your rule is based on analyzing
expressions, like function calls).
Map the violation struct to a rule code in crates/ruff_linter/src/codes.rs (e.g., B011). New rules
should be added in RuleGroup::Preview.
Update the generated files (documentation and generated code).
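As a sketch of the "violation struct" and "analyzer function" steps, a self-contained toy version might look like the following. This is schematic only: the real `Violation` trait and the `ViolationMetadata` derive live in Ruff's own crates, and real analyzer functions receive the `Checker` and real AST node types rather than the stand-ins used here.

```rust
// Schematic only: a stand-in for Ruff's real `Violation` trait.
trait Violation {
    fn message(&self) -> String;
}

// The violation struct (the real one derives `ViolationMetadata`).
struct AssertFalse;

impl Violation for AssertFalse {
    fn message(&self) -> String {
        "Do not `assert False`, raise `AssertionError()` instead".to_string()
    }
}

// A stand-in for the `ast::StmtAssert` node the checker would pass in.
struct StmtAssert {
    test_is_false_literal: bool,
}

// The analyzer function adds a diagnostic when the pattern matches.
fn assert_false(diagnostics: &mut Vec<String>, stmt: &StmtAssert) {
    if stmt.test_is_false_literal {
        diagnostics.push(AssertFalse.message());
    }
}

fn main() {
    let mut diagnostics = Vec::new();
    assert_false(&mut diagnostics, &StmtAssert { test_is_false_literal: true });
    println!("{diagnostics:?}");
}
```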
To trigger the violation, you'll likely want to augment the logic in crates/ruff_linter/src/checkers/ast.rs
to call your new function at the appropriate time and with the appropriate inputs. The Checker
defined therein is a Python AST visitor, which iterates over the AST, building up a semantic model,
and calling out to lint rule analyzer functions as it goes.
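Schematically, the Checker's traversal resembles the following toy model. The real visitor in crates/ruff_linter also builds the semantic model and dispatches to many analyzer functions; the types and the single inlined rule here are placeholders.

```rust
// Toy model of the `Checker`'s dispatch: walk the AST and hand each node to
// the analyzer functions for the rules that care about it.
enum Stmt {
    Assert { test_is_false_literal: bool },
    Pass,
}

struct Checker {
    diagnostics: Vec<String>,
}

impl Checker {
    fn visit_stmt(&mut self, stmt: &Stmt) {
        // analyze/statement.rs plays this role for real statement-based rules.
        if let Stmt::Assert { test_is_false_literal: true } = stmt {
            self.diagnostics.push("B011: Do not `assert False`".to_string());
        }
    }

    fn check(&mut self, body: &[Stmt]) {
        for stmt in body {
            self.visit_stmt(stmt);
        }
    }
}

fn main() {
    let mut checker = Checker { diagnostics: Vec::new() };
    checker.check(&[Stmt::Pass, Stmt::Assert { test_is_false_literal: true }]);
    println!("{:?}", checker.diagnostics);
}
```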
If you need to inspect the AST, you can run cargo dev print-ast with a Python file. Grep
for the Diagnostic::new invocations to understand how other, similar rules are implemented.
Once you're satisfied with your code, add tests for your rule
(see: rule testing), and regenerate the documentation and
associated assets (like our JSON Schema) with cargo dev generate-all.
Finally, submit a pull request, and include the category, rule name, and rule code in the title, as
in the examples shown above.
Like Clippy, Ruff's rule names should make grammatical and logical sense when read as "allow
${rule}" or "allow ${rule} items", as in the context of suppression comments.
For example, AssertFalse fits this convention: it flags assert False statements, and so a
suppression comment would be framed as "allow assert False".
As such, rule names should...
Highlight the pattern that is being linted against, rather than the preferred alternative.
For example, AssertFalse guards against assert False statements.
Not contain instructions on how to fix the violation, which instead belong in the rule
documentation and the fix_title.
Not contain a redundant prefix, like Disallow or Banned, which are already implied by the
convention.
When re-implementing rules from other linters, we prioritize adhering to this convention over
preserving the original rule name.
To test rules, Ruff uses snapshots of Ruff's output for a given file (fixture). Generally, there
will be one file per rule (e.g., E402.py), and each file will contain all necessary examples of
both violations and non-violations. Running cargo test will generate a snapshot file containing
Ruff's output for each fixture, which you can review and accept with cargo insta review and then
commit alongside your changes.
Once you've completed the code for the rule itself, you can define tests with the following steps:
Add a Python file to crates/ruff_linter/resources/test/fixtures/[linter] that contains the code you
want to test. The file name should match the rule name (e.g., E402.py), and it should include
examples of both violations and non-violations.
Run Ruff locally against your file and verify the output is as expected. Once you're satisfied
with the output (you see the violations you expect, and no others), proceed to the next step.
For example, if you're adding a new rule named E402, you would run Ruff, as shown above, against
your E402.py fixture file.
Note: Only a subset of rules are enabled by default. When testing a new rule, ensure that
you activate it by adding --select ${rule_code} to the command.
Add the test to the relevant crates/ruff_linter/src/rules/[linter]/mod.rs file. If you're contributing
a rule to a pre-existing set, you should be able to find a similar example to pattern-match
against. If you're adding a new linter, you'll need to create a new mod.rs file (see,
e.g., crates/ruff_linter/src/rules/flake8_bugbear/mod.rs).
Run cargo test. Your test will fail, but you'll be prompted to follow-up
with cargo insta review. Run cargo insta review, review and accept the generated snapshot,
then commit the snapshot file alongside the rest of your changes.
Run cargo test again to ensure that your test passes.
Ruff's user-facing settings live in a few different places.
First, the command-line options are defined via the Args struct in crates/ruff/src/args.rs.
Second, the pyproject.toml options are defined in crates/ruff_workspace/src/options.rs (via the
Options struct), crates/ruff_workspace/src/configuration.rs (via the Configuration struct),
and crates/ruff_workspace/src/settings.rs (via the Settings struct), which then includes
the LinterSettings struct as a field.
These represent, respectively: the schema used to parse the pyproject.toml file; an internal,
intermediate representation; and the final, internal representation used to power Ruff.
To add a new configuration option, you'll likely want to modify these latter few files (along with
args.rs, if appropriate). If you want to pattern-match against an existing example, grep for
dummy_variable_rgx, which defines a regular expression to match against acceptable unused
variables (e.g., _).
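The three-layer flow can be sketched as follows. The fields shown are illustrative stand-ins: the real structs in crates/ruff_workspace have many more fields, and the real dummy_variable_rgx holds a compiled regex rather than a string.

```rust
// Layer 1: the schema parsed from pyproject.toml — every field is optional.
#[derive(Default)]
struct Options {
    line_length: Option<usize>,
    dummy_variable_rgx: Option<String>, // the real field is a compiled regex
}

// Layer 2: an internal, intermediate representation; still optional so that
// multiple sources (config files, CLI arguments) can be merged.
#[derive(Default)]
struct Configuration {
    line_length: Option<usize>,
    dummy_variable_rgx: Option<String>,
}

impl Configuration {
    fn from_options(options: Options) -> Self {
        Self {
            line_length: options.line_length,
            dummy_variable_rgx: options.dummy_variable_rgx,
        }
    }

    // Layer 3: resolve every field to a concrete value (defaults are
    // illustrative, not necessarily Ruff's actual defaults).
    fn into_settings(self) -> LinterSettings {
        LinterSettings {
            line_length: self.line_length.unwrap_or(88),
            dummy_variable_rgx: self
                .dummy_variable_rgx
                .unwrap_or_else(|| String::from("^_+$")),
        }
    }
}

// The final, internal representation used to power Ruff.
struct LinterSettings {
    line_length: usize,
    dummy_variable_rgx: String,
}

fn main() {
    let options = Options { line_length: Some(100), ..Options::default() };
    let settings = Configuration::from_options(options).into_settings();
    println!("{} {}", settings.line_length, settings.dummy_variable_rgx);
}
```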
Note that plugin-specific configuration options are defined in their own modules (e.g.,
Settings in crates/ruff_linter/src/flake8_unused_arguments/settings.rs coupled with
Flake8UnusedArgumentsOptions in crates/ruff_workspace/src/options.rs).
Finally, regenerate the documentation and generated code with cargo dev generate-all.
The documentation uses Material for MkDocs Insiders, which is closed-source software.
This means only members of the Astral organization can preview the documentation exactly as it
will appear in production.
Outside contributors can still preview the documentation, but there will be some differences. Consult the Material for MkDocs documentation for which features are exclusively available in the insiders version.
To preview any changes to the documentation locally:
```shell
# For contributors.
uvx --with-requirements docs/requirements.txt -- mkdocs serve -f mkdocs.public.yml

# For members of the Astral org, which has access to MkDocs Insiders via sponsorship.
uvx --with-requirements docs/requirements-insiders.txt -- mkdocs serve -f mkdocs.insiders.yml
```
As of now, Ruff has an ad hoc release process: releases are cut with high frequency via GitHub
Actions, which automatically generates the appropriate wheels across architectures and publishes
them to PyPI.
Install uv: curl -LsSf https://astral.sh/uv/install.sh | sh
Run ./scripts/release.sh; this command will:
Generate a temporary virtual environment with rooster
Generate a changelog entry in CHANGELOG.md
Update versions in pyproject.toml and Cargo.toml
Update references to versions in the README.md and documentation
Display contributors for the release
The changelog should then be editorialized for consistency
Often, labels will be missing from pull requests; these entries will need to be manually organized into the proper section
Changes should be edited to be user-facing descriptions, avoiding internal details
Additionally, for minor releases:
Move the existing contents of CHANGELOG.md to changelogs/0.MINOR.x.md,
where MINOR is the previous minor release (e.g. 11 when preparing
the 0.12.0 release)
Reverse the entries to put the oldest version first (0.MINOR.0 instead
of 0.MINOR.LATEST as in the main changelog)
Build all the assets. If this fails (even though we tested in step 4), nothing has been tagged or
uploaded yet, so you can restart after pushing a fix. If you just need to rerun the build,
make sure you re-run all the failed jobs and not just a single failed job.
Upload to PyPI.
Create and push the Git tag (as extracted from pyproject.toml). We create the Git tag only
after building the wheels and uploading to PyPI, since we can't delete or modify the tag (#4468).
Attach artifacts to draft GitHub release
Trigger downstream repositories. This can fail non-catastrophically, as we can run any
downstream jobs manually if needed.
Verify the GitHub release:
The Changelog should match the content of CHANGELOG.md
Append the contributors from the scripts/release.sh script
An update is needed if
git diff old-version-tag new-version-tag -- ruff.schema.json returns a non-empty diff.
Once run successfully, you should follow the link in the output to create a PR.
If needed, update the ruff-lsp and
ruff-vscode repositories and follow
the release instructions in those repositories. ruff-lsp should always be updated
before ruff-vscode.
This step is generally not required for a patch release, but should always be done
for a minor release.
GitHub Actions will run your changes against a number of real-world projects from GitHub and
report on any linter or formatter differences. You can also run those checks locally via:
We have several ways of benchmarking and profiling Ruff:
Our main performance benchmark comparing Ruff with other tools on the CPython codebase
Microbenchmarks which run the linter or the formatter on individual files. These run on pull requests.
Profiling the linter on either the microbenchmarks or entire projects
Note
When running benchmarks, ensure that your CPU is otherwise idle (e.g., close any background
applications, like web browsers). You may also want to switch your CPU to a "performance"
mode, if it exists, especially when benchmarking short-lived processes.
To create a working environment for the above, run uv venv --project ./scripts/benchmarks,
activate the virtual environment, and then run uv sync --project ./scripts/benchmarks. All
reported benchmarks were computed using the versions specified by
./scripts/benchmarks/pyproject.toml on Python 3.11.
To benchmark Pylint, remove the following files from the CPython repository:
Then, from crates/ruff_linter/resources/test/cpython, run: time pylint -j 0 -E $(git ls-files '*.py'). This
will execute Pylint with maximum parallelism and only report errors.
To benchmark Pyupgrade, run the following from crates/ruff_linter/resources/test/cpython:
Ruff uses Criterion.rs for benchmarks. You can use
--save-baseline=<name> to store an initial baseline benchmark (e.g., on main) and then use
--baseline=<name> to compare against that baseline. Criterion will print a message telling you
if the benchmark improved or regressed compared to that baseline.
```shell
# Run once on your "baseline" code
cargo bench -p ruff_benchmark -- --save-baseline=main

# Then iterate with
cargo bench -p ruff_benchmark -- --baseline=main
```
Use cargo bench -p ruff_benchmark <filter> to only run specific benchmarks. For example: cargo bench -p ruff_benchmark lexer
to only run the lexer benchmarks.
Use cargo bench -p ruff_benchmark -- --quiet for cleaner output (without statistical relevance information)
Use cargo bench -p ruff_benchmark -- --quick to get faster results (more prone to noise)
You can either use the microbenchmarks from above or a project directory for benchmarking. There
are a lot of profiling tools out there; The Rust Performance Book lists some
examples.
You can also use the ruff_dev launcher to run ruff check multiple times on a repository to
gather enough samples for a good flamegraph (change the 999, the sample rate, and the 30, the number
of checks, to your liking).
cargo dev is a shortcut for cargo run --package ruff_dev --bin ruff_dev. You can run some useful
utils with it:
cargo dev print-ast <file>: Print the AST of a Python file using Ruff's
Python parser.
For if True: pass # comment, you can see the syntax tree, the byte offsets for start and
stop of each node and also how the : token, the comment and whitespace are not represented
anymore:
cargo dev print-cst <file>: Print the CST of a Python file using
LibCST, which is used in addition to the RustPython parser
in Ruff. For example, for if True: pass # comment, everything, including the whitespace, is represented:
cargo dev generate-all: Update ruff.schema.json, docs/configuration.md and docs/rules.
You can also set RUFF_UPDATE_SCHEMA=1 to update ruff.schema.json during cargo test.
cargo dev generate-cli-help, cargo dev generate-docs and cargo dev generate-json-schema:
Update just docs/configuration.md, docs/rules and ruff.schema.json respectively.
cargo dev round-trip <python file or jupyter notebook>: Read a Python file or Jupyter Notebook,
parse it, serialize the parsed representation and write it back. Used to check how good our
representation is so that fixes don't rewrite irrelevant parts of a file.
cargo dev format_dev: See ruff_python_formatter README.md
If we view Ruff as a compiler, in which the inputs are paths to Python files and the outputs are
diagnostics, then our current compilation pipeline proceeds as follows:
File discovery: Given paths like foo/, locate all Python files in any specified subdirectories, taking into account our hierarchical settings system and any exclude options.
Package resolution: Determine the "package root" for every file by traversing over its parent directories and looking for __init__.py files.
Cache initialization: For every "package root", initialize an empty cache.
Analysis: For every file, in parallel:
Cache read: If the file is cached (i.e., its modification timestamp hasn't changed since it was last analyzed), short-circuit, and return the cached diagnostics.
Tokenization: Run the lexer over the file to generate a token stream.
Indexing: Extract metadata from the token stream, such as: comment ranges, # noqa locations, # isort: off locations, "doc lines", etc.
Token-based rule evaluation: Run any lint rules that are based on the contents of the token stream (e.g., commented-out code).
Filesystem-based rule evaluation: Run any lint rules that are based on the contents of the filesystem (e.g., lack of __init__.py file in a package).
Logical line-based rule evaluation: Run any lint rules that are based on logical lines (e.g., stylistic rules).
Parsing: Run the parser over the token stream to produce an AST. (This consumes the token stream, so anything that relies on the token stream needs to happen before parsing.)
AST-based rule evaluation: Run any lint rules that are based on the AST. This includes the vast majority of lint rules. As part of this step, we also build the semantic model for the current file as we traverse over the AST. Some lint rules are evaluated eagerly, as we iterate over the AST, while others are evaluated in a deferred manner (e.g., unused imports, since we can't determine whether an import is unused until we've finished analyzing the entire file), after we've finished the initial traversal.
Import-based rule evaluation: Run any lint rules that are based on the module's imports (e.g., import sorting). These could, in theory, be included in the AST-based rule evaluation phase — they're just separated for simplicity.
Physical line-based rule evaluation: Run any lint rules that are based on physical lines (e.g., line-length).
Suppression enforcement: Remove any violations that are suppressed via # noqa directives or per-file-ignores.
Cache write: Write the generated diagnostics to the package cache using the file as a key.
Reporting: Print diagnostics in the specified format (text, JSON, etc.), to the specified output channel (stdout, a file, etc.).
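The ordering constraints in the analysis phase, in particular that parsing consumes the token stream, can be captured in a toy model like the one below. All names and types here are placeholders, not Ruff's real APIs.

```rust
// Placeholder types: a token stream and an AST.
struct Tokens(Vec<String>);
struct Ast(String);

fn tokenize(source: &str) -> Tokens {
    Tokens(source.split_whitespace().map(str::to_string).collect())
}

// Parsing takes `Tokens` by value — it consumes the stream, which is why all
// token-based work has to happen before this point.
fn parse(tokens: Tokens) -> Ast {
    Ast(tokens.0.join(" "))
}

fn analyze_file(source: &str) -> Vec<&'static str> {
    let mut phases = Vec::new();
    let tokens = tokenize(source);
    phases.push("index"); // comment ranges, `# noqa` locations, ...
    phases.push("token-based rules");
    phases.push("filesystem-based rules");
    phases.push("logical-line rules");
    let _ast = parse(tokens); // the token stream is no longer usable after this
    phases.push("AST-based rules"); // includes building the semantic model
    phases.push("import-based rules");
    phases.push("physical-line rules");
    phases.push("suppression enforcement");
    phases
}

fn main() {
    println!("{:?}", analyze_file("import os"));
}
```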
To understand Ruff's import categorization system, we first need to define two concepts:
"Project root": The directory containing the pyproject.toml, ruff.toml, or .ruff.toml file,
discovered by identifying the "closest" such directory for each Python file. (If you're running
via ruff --config /path/to/pyproject.toml, then the current working directory is used as the
"project root".)
"Package root": The top-most directory defining the Python package that includes a given Python
file. To find the package root for a given Python file, traverse up its parent directories until
you reach a parent directory that doesn't contain an __init__.py file (and isn't in a subtree
marked as a namespace package); take the directory
just before that, i.e., the first directory in the package.
The project root does not have a significant impact beyond that all relative paths within the loaded
configuration file are resolved relative to the project root.
For example, to indicate that bar above is a namespace package (it isn't, but let's run with it),
the pyproject.toml would list namespace-packages = ["./src/bar"], which would resolve
to my_project/src/bar.
The same logic applies when providing a configuration file via --config. In that case, the
current working directory is used as the project root, and so all paths in that configuration file
are resolved relative to the current working directory. (As a general rule, we want to avoid relying
on the current working directory as much as possible, to ensure that Ruff exhibits the same behavior
regardless of where and how you invoke it — but that's hard to avoid in this case.)
Additionally, if a pyproject.toml file extends another configuration file, Ruff will still use
the directory containing that pyproject.toml file as the project root. For example, if
./my_project/pyproject.toml contains:
```toml
[tool.ruff]
extend = "/path/to/pyproject.toml"
```
Then Ruff will use ./my_project as the project root, even though the configuration file extends
/path/to/pyproject.toml. As such, if the configuration file at /path/to/pyproject.toml contains
any relative paths, they will be resolved relative to ./my_project.
If a project uses nested configuration files, then Ruff would detect multiple project roots, one for
each configuration file.
The package root is used to determine a file's "module path". Consider, again, baz.py. In that
case, ./my_project/src/foo was identified as the package root, so the module path for baz.py
would resolve to foo.bar.baz — as computed by taking the relative path from the package root
(inclusive of the root itself). The module path can be thought of as "the path you would use to
import the module" (e.g., import foo.bar.baz).
The package root and module path are used to, e.g., convert relative to absolute imports, and for
import categorization, as described below.
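The package-root search and module-path computation above can be sketched as small helpers. This is a simplified model: directories containing an __init__.py are supplied as an in-memory list rather than probed on disk, and namespace packages are ignored.

```rust
fn parent(path: &str) -> Option<&str> {
    path.rsplit_once('/').map(|(dir, _)| dir)
}

/// Walk up from the file's parent directory; the package root is the last
/// ancestor that still contains an `__init__.py`.
fn package_root<'a>(file: &'a str, dirs_with_init: &[&str]) -> Option<&'a str> {
    let mut root = None;
    let mut dir = parent(file)?;
    while dirs_with_init.iter().any(|d| *d == dir) {
        root = Some(dir);
        match parent(dir) {
            Some(p) => dir = p,
            None => break,
        }
    }
    root
}

/// The module path is the dotted path from the package root (inclusive of the
/// root itself) to the file, e.g. `foo.bar.baz`.
fn module_path(package_root: &str, file: &str) -> Option<String> {
    let prefix = parent(package_root).unwrap_or("");
    let relative = file.strip_prefix(prefix)?.trim_start_matches('/');
    Some(relative.strip_suffix(".py")?.replace('/', "."))
}

fn main() {
    let dirs = ["my_project/src/foo", "my_project/src/foo/bar"];
    let root = package_root("my_project/src/foo/bar/baz.py", &dirs).unwrap();
    println!("{root} -> {:?}", module_path(root, "my_project/src/foo/bar/baz.py"));
}
```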
When sorting and formatting import blocks, Ruff categorizes every import into one of five
categories:
"Future": the import is a __future__ import. That's easy: just look at the name of the
imported module!
"Standard library": the import comes from the Python standard library (e.g., import os).
This is easy too: we include a list of all known standard library modules in Ruff itself, so it's
a simple lookup.
"Local folder": the import is a relative import (e.g., from .foo import bar). This is easy
too: just check if the import includes a level (i.e., a dot-prefix).
"First party": the import is part of the current project. (More on this below.)
"Third party": everything else.
The real challenge lies in determining whether an import is first-party — everything else is either
trivial, or (as in the case of third-party) merely defined as "not first-party".
There are three ways in which an import can be categorized as "first-party":
Explicit settings: the import is marked as such via the known-first-party setting. (This
should generally be seen as an escape hatch.)
Same-package: the imported module is in the same package as the current file. This gets back
to the importance of the "package root" and the file's "module path". Imagine that we're
analyzing baz.py above. If baz.py contains any imports that appear to come from the foo
package (e.g., from foo import bar or import foo.bar), they'll be classified as first-party
automatically. This check is as simple as comparing the first segment of the current file's
module path to the first segment of the import.
Source roots: Ruff supports a src setting, which
sets the directories to scan when identifying first-party imports. The algorithm is
straightforward: given an import, like import foo, iterate over the directories enumerated in
the src setting and, for each directory, check for the existence of a subdirectory foo or a
file foo.py.
By default, src is set to the project root, along with the "src" subdirectory of the project root.
This ensures that Ruff supports both flat and "src" layouts out of the box.
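A toy version of the five-way categorization might look as follows. The hard-coded stdlib names and the in-memory list of src-root members are stand-ins for Ruff's generated standard-library table and its filesystem checks.

```rust
#[derive(Debug, PartialEq)]
enum ImportCategory {
    Future,
    StandardLibrary,
    LocalFolder,
    FirstParty,
    ThirdParty,
}

fn categorize(
    module: &str,
    level: u32,                    // number of leading dots in a relative import
    current_module_first_segment: &str,
    src_root_members: &[&str],     // names of dirs/files found under the `src` roots
) -> ImportCategory {
    let first_segment = module.split('.').next().unwrap_or(module);
    if level > 0 {
        // Relative import (e.g., `from .foo import bar`).
        ImportCategory::LocalFolder
    } else if first_segment == "__future__" {
        ImportCategory::Future
    } else if matches!(first_segment, "os" | "sys" | "re" | "typing") {
        // Stand-in for the full list of known stdlib modules.
        ImportCategory::StandardLibrary
    } else if first_segment == current_module_first_segment
        || src_root_members.iter().any(|m| *m == first_segment)
    {
        // Same-package, or found under a `src` root.
        ImportCategory::FirstParty
    } else {
        ImportCategory::ThirdParty
    }
}

fn main() {
    // From within `foo/bar/baz.py` (module path `foo.bar.baz`):
    println!("{:?}", categorize("foo.bar", 0, "foo", &[]));
}
```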