From 2c747402a8da893ae7b23b663138d512c7f1a695 Mon Sep 17 00:00:00 2001
From: Dotta
Date: Tue, 17 Mar 2026 09:31:21 -0500
Subject: [PATCH] docs: add PR thinking path guidance to contributing

---
 CONTRIBUTING.md | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index ab420a24..1eba95e3 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -7,6 +7,7 @@ We really appreciate both small fixes and thoughtful larger changes.
 ## Two Paths to Get Your Pull Request Accepted
 
 ### Path 1: Small, Focused Changes (Fastest way to get merged)
+
 - Pick **one** clear thing to fix/improve
 - Touch the **smallest possible number of files**
 - Make sure the change is very targeted and easy to review
@@ -16,6 +17,7 @@ We really appreciate both small fixes and thoughtful larger changes.
 These almost always get merged quickly when they're clean.
 
 ### Path 2: Bigger or Impactful Changes
+
 - **First** talk about it in Discord
   → #dev channel
   → Describe what you're trying to solve
   → Share rough ideas / approach
@@ -30,12 +32,43 @@ These almost always get merged quickly when they're clean.
 PRs that follow this path are **much** more likely to be accepted, even when they're large.
 
 ## General Rules (both paths)
+
 - Write clear commit messages
 - Keep PR title + description meaningful
 - One PR = one logical change (unless it's a small related group)
 - Run tests locally first
 - Be kind in discussions 😄
 
+## Writing a Good PR Message
+
+Please include a "thinking path" at the top of your PR message: a short chain of reasoning that leads from the project's overall purpose down to the specific thing you fixed. For example:
+
+### Thinking Path Example 1:
+
+> - Paperclip orchestrates ai-agents for zero-human companies
+> - There are many types of adapters, one for each LLM provider
+> - But LLMs have a context limit, and not all agents can automatically compact their context
+> - So we need an adapter-specific setting for which adapters can and cannot automatically compact their context
+> - This pull request adds per-adapter configuration of compaction, either automatic or Paperclip-managed
+> - That way we get optimal performance from any adapter/provider in Paperclip
+
+### Thinking Path Example 2:
+
+> - Paperclip orchestrates ai-agents for zero-human companies
+> - But humans want to watch the agents and oversee their work
+> - Human users also operate in teams, so they need their own logins, profiles, views, etc.
+> - So we have a multi-user system for humans
+> - But humans want to be able to update their own profile picture and avatar
+> - But the avatar upload form wasn't saving the avatar to the file storage system
+> - So this PR fixes the avatar upload form to use the file storage service
+> - The benefit is that we don't end up with a one-off file store for just one part of the system, which would cause confusion and extra configuration
+
+After the thinking path, write the rest of your normal PR message.
+
+It should cover what you did, why you did it, why it matters and what the benefits are, how we can verify it works, and any risks.
+
+If your change is visible in the UI, please include screenshots, ideally before and after. (Use something like the [agent-browser skill](https://github.com/vercel-labs/agent-browser/blob/main/skills/agent-browser/SKILL.md) to take them.)
+
 Questions? Just ask in #dev — we're happy to help.
 
 Happy hacking!