AI Integration
Note
The AI integration is experimental, starting with version 25.1.010, and may change in future updates.
- Version 25.1.012: GitHub LLM support, Root URLs changed(!), improved error processing, improved default prompt (mainly for `gpt-4o-mini`).
SmartGit offers optional integration with AI services to enhance its functionality. All AI-based features are disabled by default, ensuring no data is shared without user consent. Users must opt-in and configure these services explicitly.
The AI features in SmartGit do not operate through an AI Assistant like ChatGPT. Instead, SmartGit directly interacts with AI models using their APIs. An API account will be required to use these services.
Commit Message Generation
SmartGit can leverage AI to generate commit messages based on your working tree modifications or staged changes.
This involves transmitting the complete `git diff` (or `git diff --cached`) to an AI service.
If you are using the Standard Window or have otherwise opted to include untracked changes for commit, the Git diff will be augmented with your untracked files as well.
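To see what such a payload looks like, you can reproduce it with plain Git commands. The following is a self-contained sketch using a throwaway demo repository (all file names and contents are illustrative); the byte count at the end is the quantity that the `maxDiffSize` option described below limits.

```shell
# Sketch: reproduce the payload SmartGit sends for commit message generation.
# Uses a throwaway repository so the commands are safe to run anywhere.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo
echo "hello" > file.txt
git add file.txt
git commit -qm "initial commit"
echo "world" >> file.txt
git add file.txt
# Staged changes -- the equivalent of what SmartGit transmits:
git diff --cached
# Byte count of the payload, for comparison with maxDiffSize:
git diff --cached | wc -c
```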
Once enabled, you’ll find an AI button with a drop-down menu in the Commit View. This menu lists all configured AI services, indicating the currently active one. Pressing the button or selecting a different AI will send the Git diff to the chosen service, which then generates a commit message and streams it back to SmartGit.
The behavior regarding the AI-generated commit message is as follows:
- If the current commit message is empty or previously generated by AI, the new message will replace it.
- If a manually created message exists, the new AI-generated message will be inserted at the cursor’s current location; this can be useful to augment an existing manually created message with more details generated by the AI.
- Depending on your configuration (see below), manual changes to the commit message after invoking the AI service may cause the AI-generated message to be buffered rather than displayed directly. A buffered message causes the AI button icon to blink green; clicking the button offers options for how to proceed with the message.
Options
If you select Continue in Background, SmartGit will allow you to manually modify the commit message without interrupting the AI process. The concurrently generated message will be buffered (see above).
By default, the commit message description is wrapped at 72 characters. Wrapping can be disabled using the Low-level property `ai.commitMessageGeneration.wrapDescription`.
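For reference, the corresponding entry in SmartGit's Low-Level Properties editor would look like this sketch (the property name is taken from the text above; `false` is the value that disables wrapping):

```
ai.commitMessageGeneration.wrapDescription = false
```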
Error Handling
Should errors occur during the interaction with the AI, the icon will display a red cross, and additional error details will be provided in a tooltip.
Supported Services
SmartGit supports integration with the following AI services:
Cloud-based Services:
On-premise/Self-hosted Services:
Configuration
You can configure AI settings in your repository's `.git/config` or your global `.gitconfig`. A minimal setup looks like this:
[llm ...]
type = ...
model = ...
url = ...
Entry-specific Configuration Options
Each `llm` entry has a name that is used for display in the GUI and can have specific settings:
type (mandatory)
Defines the service type. Available options:
github
openai
anthropic
ollama
model (mandatory)
Specifies the model name as recognized by the service, e.g., `o3-mini` for OpenAI's corresponding model.
url (mandatory)
Indicates the API's root URL. Root URLs are pre-defined for cloud services; for self-hosted services, verify the URL with your administrator.
maxDiffSize
Sets the maximum permitted Git diff size for AI submission, defaulting to a conservative value to avoid inadvertently sharing large parts of your codebase. Ensure it remains within the model's context window size; otherwise, parts of your diff won't be processed and/or the model may return confusing results.
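As an illustration, raising the limit for a single entry might look like the sketch below; the entry name and the value 16384 are arbitrary examples, and the limit should stay within the model's context window:

```
[llm "example"]
	type = openai
	model = o3-mini
	url = https://api.openai.com/v1
	apiKey = ...
	maxDiffSize = 16384
```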
apiKey (partially mandatory)
The key used to authenticate against the service's API. API keys are typically required for cloud services:
For GitHub Models, GitHub provides free, rate-limited access to certain models. Once you have set up SmartGit's GitHub Integration, you can begin using these models with minimal configuration (see below).
debug
Enable logging of communication with the AI service by setting `debug = true`.
Logs will be saved to SmartGit's settings directory, using file names beginning with `ai-`.
promptGenerateCommitMessage
By default, SmartGit sends a pre-defined prompt for commit message generation, which may evolve over time based on user feedback. This option lets you customize the prompt, e.g. for experimentation or tailored message styles. The prompt must include:
`${gitDiff}` - this variable will be substituted with the actual Git diff
parameters
Allows additional model-specific parameters defined in JSON format (see examples below).
Global Configuration Options
Global settings apply to all AI configurations. They are configured directly in the `llm` section:
[llm]
option = value
...
autoTransferOptions
Set `autoTransferOptions = true` to enable additional, potentially resource-intensive options in the Commit Message button popup:
Submit on Stage: automatically invokes AI generation upon staging or unstaging files, minimizing wait times but potentially increasing API usage.
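Putting this together, enabling the option is a single line in the global `llm` section:

```
[llm]
	autoTransferOptions = true
```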
Example Configurations
The configurations below will work out of the box once you have entered your `apiKey`.
GitHub gpt-4o-mini
[llm "gh-4o-mini"]
type = github
model = gpt-4o-mini
url = https://models.inference.ai.azure.com
GitHub o3-mini
Note
As of February 2025, advanced models such as `o3-mini` require a GitHub Copilot Business account.
[llm "gh-o3-mini"]
type = github
model = o3-mini
url = https://models.inference.ai.azure.com
OpenAI o3-mini
Warning
For version 25.1.011 and older, the root endpoint was just `https://api.openai.com`.
[llm "o3-mini"]
type = openai
model = o3-mini
url = https://api.openai.com/v1
apiKey = ...
OpenAI o1-mini
Warning
For version 25.1.011 and older, the root endpoint was just `https://api.openai.com`.
[llm "o1-mini"]
type = openai
model = o1-mini
url = https://api.openai.com/v1
apiKey = ...
OpenAI GPT-4o
Warning
For version 25.1.011 and older, the root endpoint was just `https://api.openai.com`.
[llm "GPT-4o"]
type = openai
model = gpt-4o
url = https://api.openai.com/v1
apiKey = ...
Anthropic Claude Sonnet 3.5
Warning
For version 25.1.011 and older, the root endpoint was just `https://api.anthropic.com`.
[llm "Claude 3.5"]
type = anthropic
model = claude-3-5-sonnet-20241022
url = https://api.anthropic.com/v1
apiKey = ...
Advanced Example Configurations
The configurations below provide samples for custom AI services and/or illustrate advanced options. They require specific adjustments to become working configurations.
DeepSeek on Ollama with Debugging
[llm "DeepSeek R1 70B"]
type = ollama
model = deepseek-r1:70b
url = ...
debug = true
Custom Prompt with OpenAI o3-mini
Warning
For version 25.1.011 and older, the root endpoint was just `https://api.openai.com`.
[llm "o3-mini"]
type = openai
model = o3-mini
url = https://api.openai.com/v1
apiKey = ...
promptGenerateCommitMessage = \
Summarize the following Git diff in one concise sentence:\n\
\n\
Use imperative language.\n\
Provide only the commit message without any explanatory notes.\n\
\n\
${gitDiff}
OpenAI o3-mini “high”
Warning
For version 25.1.011 and older, the root endpoint was just `https://api.openai.com`.
[llm "o3-mini"]
type = openai
model = o3-mini
url = https://api.openai.com/v1
apiKey = ...
maxDiffSize = 8192
parameters = "{ \"reasoning_effort\" : \"high\" }"