
PentestGPT: LLM-based penetration testing tool

AI-assisted Python framework for automated security testing.

Live rankings (06:52 AM, steady):
  • Overall: #26
  • AI & ML: #16
  • 30-day ranking trend: overall #26, AI & ML #16

Repository stats:
  • Stars: 10.9K (+58 in the last 7 days)
  • Forks: 1.7K (+18 in the last 7 days)
  • Downloads: 216.9K

Learn more about PentestGPT

PentestGPT is a Python-based penetration testing tool that leverages large language models to automate and assist security testing tasks. The tool interfaces with multiple LLM providers through a unified API, allowing users to select from cloud-based models (GPT-4o, Gemini, Deepseek) or run models locally using Ollama. It provides command-line interfaces for reasoning and parsing tasks, with configurable logging and base URL settings for different deployment scenarios. The tool is designed for security professionals to integrate AI-assisted analysis into penetration testing workflows.


1. Multi-provider LLM support: Supports OpenAI, Google Gemini, Deepseek, and local Ollama models through a unified interface, allowing users to choose between cloud and local deployment options based on privacy and capability requirements.

2. Local model capability: Includes integration with Ollama for running models locally, enabling offline operation and privacy-focused deployments without reliance on external API services.

3. Modular reasoning and parsing: Separates reasoning and parsing tasks into configurable components, allowing different LLM models to be used for different stages of the penetration testing workflow (a sketch follows the usage example below).


from pentestgpt import PentestGPT

pentester = PentestGPT(reasoning_model="gpt-4o")

# Analyze a security finding
response = pentester.reason(
    "I found an open port 22 with SSH service. What should I test next?"
)

print(response)
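
The usage example above targets a cloud model; the same entry point could presumably be pointed at a local model and split across reasoning and parsing stages. The sketch below is illustrative only: the parsing_model parameter and the Ollama model name are assumptions, not parameters documented on this page.

from pentestgpt import PentestGPT

# Hypothetical configuration: parsing_model and the "llama3" identifier
# are illustrative assumptions, not documented parameters.
local_pentester = PentestGPT(
    reasoning_model="llama3",   # local Ollama model for high-level reasoning
    parsing_model="llama3",     # local Ollama model for parsing tool output
)

response = local_pentester.reason(
    "Nmap shows ports 80 and 443 open. Suggest the next enumeration steps."
)
print(response)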

v0.14.0

Adds OpenAI API compatibility layer and native GPT-4o model support; release notes do not specify breaking changes or migration steps.

  • Integrate OpenAI-compatible endpoints to enable drop-in replacement for custom or third-party LLM providers (see the sketch below).
  • Use GPT-4o model by selecting it in configuration; no details provided on required API version or feature differences.
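
In practice, "OpenAI-compatible" means any server that speaks the OpenAI chat API can stand in for the official endpoint by changing the base URL. A minimal sketch using the openai Python client, assuming a local Ollama server and an illustrative model name; this shows the concept of a drop-in provider, not PentestGPT's internals.

from openai import OpenAI

# Assumption: the URL and model name are placeholders for any
# OpenAI-compatible provider, e.g. a local Ollama server.
client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="not-needed-for-local",
)

reply = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Summarize the risk of an exposed SSH service."}],
)
print(reply.choices[0].message.content)
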
v0.13.0

Maintenance release with bug fixes, dependency upgrades via Poetry, and new vision model support; release notes do not specify breaking changes.

  • Set the OPENAI_BASEURL environment variable to customize API endpoints (see the sketch below); fixes applied for key binding and default model selection.
  • Enable vision model capabilities and Gemini integration; GPT4all now works with default setup after configuration fixes.
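
A minimal sketch of redirecting the API endpoint via the environment variable named in these release notes; the URL is a placeholder, and the assumption that PentestGPT reads the variable when the client is constructed is not confirmed by this page.

import os

# Assumption: OPENAI_BASEURL is read when the client is created;
# the URL below is a placeholder for a custom gateway.
os.environ["OPENAI_BASEURL"] = "https://my-llm-gateway.example.com/v1"

from pentestgpt import PentestGPT

pentester = PentestGPT(reasoning_model="gpt-4o")
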
v0.9.1

Adds local LLM support with custom API endpoints; fixes unspecified bug from v0.9.0.

  • Review examples in pentestgpt/utils/APIs to configure custom LLM endpoints for local models.
  • Release notes do not specify the bug fixed in v0.9.1; check commit history if upgrading from v0.9.0.

