PentestGPT: LLM-based penetration testing tool
AI-assisted Python framework for automated security testing.
Learn more about PentestGPT
PentestGPT is a Python-based penetration testing tool that uses large language models to automate and assist security testing tasks. It interfaces with multiple LLM providers through a unified API, letting users select cloud-hosted models (GPT-4o, Gemini, DeepSeek) or run models locally via Ollama. The tool provides command-line interfaces for reasoning and parsing tasks, with configurable logging and base-URL settings for different deployment scenarios, and is designed for security professionals who want to integrate AI-assisted analysis into penetration testing workflows.
Multi-provider LLM support
Supports OpenAI, Google Gemini, DeepSeek, and local Ollama models through a unified interface, letting users choose between cloud and local deployment based on privacy and capability requirements.
Local model capability
Includes integration with Ollama for running models locally, enabling offline operation and privacy-focused deployments without reliance on external API services.
Modular reasoning and parsing
Separates reasoning and parsing tasks into configurable components, allowing different LLM models to be used for different stages of the penetration testing workflow.
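Conceptually, this split means a session carries two independent model settings rather than one: a capable model for planning and a cheaper or local one for digesting raw tool output. A hypothetical sketch of that shape (field names are illustrative, not PentestGPT's actual configuration schema):

```python
from dataclasses import dataclass

# Sketch of the reasoning/parsing split: two independent model slots, so a
# strong model can drive strategy while a cheaper one parses tool output.
# Field names here are illustrative, not PentestGPT's real config keys.
@dataclass
class SessionConfig:
    reasoning_model: str = "gpt-4o"      # plans the next testing steps
    parsing_model: str = "gpt-4o-mini"   # summarizes raw command output

# Mix and match, e.g. parse locally via an Ollama-served model:
cfg = SessionConfig(parsing_model="llama3")
print(cfg.reasoning_model, cfg.parsing_model)
```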
from pentestgpt import PentestGPT
pentester = PentestGPT(reasoning_model="gpt-4o")
# Analyze a security finding
response = pentester.reason(
    "I found an open port 22 with SSH service. What should I test next?"
)
print(response)
Adds OpenAI API compatibility layer and native GPT-4o model support; release notes do not specify breaking changes or migration steps.
- Integrate OpenAI-compatible endpoints to enable drop-in replacement for custom or third-party LLM providers.
- Use the GPT-4o model by selecting it in configuration; no details provided on required API version or feature differences.
Maintenance release with bug fixes, dependency upgrades via Poetry, and new vision model support; release notes do not specify breaking changes.
- Set the OPENAI_BASEURL environment variable to customize API endpoints; fixes applied for key binding and default model selection.
- Enable vision model capabilities and Gemini integration; GPT4All now works with the default setup after configuration fixes.
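The release notes name OPENAI_BASEURL as the endpoint override knob. A typical read-with-fallback pattern looks like this (the fallback is the standard OpenAI endpoint; the helper itself is a sketch, not PentestGPT source):

```python
import os

# Sketch: honor OPENAI_BASEURL if set, otherwise fall back to the official
# endpoint. This mirrors the override described in the release notes; the
# helper function is illustrative, not PentestGPT's implementation.
def api_base_url() -> str:
    return os.environ.get("OPENAI_BASEURL", "https://api.openai.com/v1")

# Redirect requests, e.g. to a local OpenAI-compatible proxy:
os.environ["OPENAI_BASEURL"] = "http://localhost:8000/v1"
print(api_base_url())  # http://localhost:8000/v1
```

Unsetting the variable restores the default endpoint, which makes the same install usable against cloud and self-hosted backends without code changes.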
Adds local LLM support with custom API endpoints; fixes unspecified bug from v0.9.0.
- Review the examples in pentestgpt/utils/APIs to configure custom LLM endpoints for local models.
- Release notes do not specify the bug fixed in v0.9.1; check the commit history if upgrading from v0.9.0.
Related Repositories
Discover similar tools and frameworks used by developers
EasyOCR
PyTorch OCR library using CRAFT and CRNN models.
paperless-ngx
Self-hosted OCR document archive with ML classification.
llama.cpp
Quantized LLM inference with hardware-accelerated CPU/GPU backends.
context7
MCP server delivering version-specific library documentation to LLMs.
cai
LLM-powered Python framework for automated penetration testing workflows.