5 examples

Orphaned process

A process continues running after its parent process ends, consuming resources.

[ FAQ1 ]

What is an orphaned process?

An orphaned process happens when a parent process ends before its child process, leaving the child running without its original parent. On Linux systems, orphaned child processes are automatically adopted by the init or systemd process (usually PID 1), allowing them to continue running in the background. On Windows, orphaned processes also continue running but without a parent-child relationship, making process management and cleanup more challenging. Although orphaned processes do not inherently consume excessive resources, their unintended presence can signal design flaws, resource leaks, or inefficient handling of processes within software applications.
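To see this adoption in practice, here is a minimal Node.js/TypeScript sketch (the sleep duration and the choice of the sleep binary are arbitrary, for illustration only): the parent spawns a detached child and exits immediately, and on Linux the child is re-parented to PID 1 or a user-level subreaper.

import { spawn } from "child_process";

// Spawn a long-running child in its own process group, with stdio dropped
// so nothing keeps the parent alive.
const child = spawn("sleep", ["60"], {
  detached: true,
  stdio: "ignore",
});

child.unref(); // let the parent's event loop exit without waiting for the child

console.log(`Child PID: ${child.pid}. Parent exiting now.`);
// After the parent exits, `ps -o ppid= -p <child PID>` on Linux typically
// reports 1 (or the PID of a user-level init/subreaper): the child has been
// orphaned and adopted.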
[ FAQ2 ]

How to fix orphaned processes

Fixing orphaned processes typically involves ensuring the parent process properly waits for or terminates its child processes before exiting. On Linux, you can explicitly manage child processes using system calls such as wait() or waitpid(), ensuring all child processes complete before the parent terminates. If a parent process has already exited, manually terminating or restarting orphaned child processes may be necessary using commands like kill or killall. On Windows, use task management tools (Task Manager) or command-line utilities (taskkill) to manually terminate orphaned processes. Implementing robust error handling, process monitoring, and proper cleanup routines in your application code helps prevent orphaned processes from occurring in the first place.
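As a minimal sketch of that prevention in Node.js (the node worker.js child command is a placeholder, not taken from any example below): the parent tracks the children it starts and terminates any that are still running before it exits.

import { spawn, ChildProcess } from "child_process";

const children: ChildProcess[] = [];

// Start a child process and remember it so it can be cleaned up later.
function startWorker(): ChildProcess {
  const child = spawn("node", ["worker.js"], { stdio: "inherit" });
  children.push(child);
  return child;
}

// Terminate any child that is still running before the parent goes away.
function cleanup(): void {
  for (const child of children) {
    if (child.exitCode === null && !child.killed) {
      child.kill("SIGTERM");
    }
  }
}

process.on("exit", cleanup);                    // normal exit
process.on("SIGINT", () => process.exit(130));  // Ctrl+C routes through cleanup
process.on("SIGTERM", () => process.exit(143)); // termination signal routes through cleanup

startWorker();

The specific signals and exit codes are conventions; the point is that every exit path runs the same cleanup so children are never left behind.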
diff block
+name: Web App E2E Testing
+
+on:
+  pull_request:
+    branches:
+      - main
+    paths:
+      - 'web/**'
+      - '.github/workflows/web-testing.yml'
+      - '.github/actions/setup-test-environment/action.yml' # Rerun if common setup changes
+      - '.github/actions/stop-supabase/action.yml'
+
+jobs:
+  test:
+    runs-on: blacksmith-16vcpu-ubuntu-2204
+
+    # Service container for Redis (needed by the setup action and potentially API)
+    services:
+      redis:
+        image: redis
+        ports:
+          - 6379:6379
+        options: >-
+          --health-cmd "redis-cli ping"
+          --health-interval 10s
+          --health-timeout 5s
+          --health-retries 5
+
+    steps:
+      - name: Checkout code
+        uses: actions/checkout@v4
+
+      - name: Set up Node.js # Still needed for frontend build/test commands
+        uses: actions/setup-node@v4
+        with:
+          node-version: '20'
+
+      - name: Setup Test Environment
+        id: setup_env # Give an ID to reference outputs
+        uses: ./.github/actions/setup-test-environment
+
+      # Build/Run/Wait steps remain for web testing as it needs the API server running
+      - name: Build API Server
+        working-directory: ./api
+        run: cargo build --release
+        env:
+          # Potentially needed if build process requires env vars, though unlikely
+          DATABASE_URL: ${{ steps.setup_env.outputs.database-url }}
+
+      - name: Run API Server
+        working-directory: ./api
+        run: |
+          ./target/release/server & # Run in background
+          echo $! > /tmp/api-server.pid # Store PID for later cleanup
greptile
logic: API server PID file is created but never used for cleanup. Could lead to orphaned processes.
suggested fix
./target/release/server & # Run in background
echo $! > /tmp/api-server.pid # Store PID for later cleanup
+ trap 'kill $(cat /tmp/api-server.pid)' EXIT # Ensure cleanup on exit
diff block
+#!/bin/sh
+
+log_file="${ERROR_LOG_PATH:-/logs/error.log}"
+error_pattern="${ERROR_PATTERN:-ERROR}"
+TARGET=2
+
+# start the guardian node
+guardiand \
+ transfer-verifier \
+ evm \
+ --rpcUrl ws://eth-devnet:8545 \
+ --coreContract 0xC89Ce4735882C9F0f0FE26686c53074E09B0D550 \
+ --tokenContract 0x0290FB167208Af455bB137780163b7B7a9a10C16 \
+ --wrappedNativeContract 0xDDb64fE46a91D46ee29420539FC25FD07c5FEa3E \
+ --logLevel=info \
+ 2> /tmp/error.log &
greptile
style: No trap handler for background guardiand process, could leave orphaned process on script exit
suggested fix
+# Set up trap to kill background process on exit
+trap 'kill $(jobs -p) 2>/dev/null' EXIT
guardiand \
transfer-verifier \
evm \
--rpcUrl ws://eth-devnet:8545 \
--coreContract 0xC89Ce4735882C9F0f0FE26686c53074E09B0D550 \
--tokenContract 0x0290FB167208Af455bB137780163b7B7a9a10C16 \
--wrappedNativeContract 0xDDb64fE46a91D46ee29420539FC25FD07c5FEa3E \
--logLevel=info \
2> /tmp/error.log &
diff block
+import { showHUD } from "@raycast/api";
+import { spawn } from "child_process";
+
+export default async function Command() {
+  await showHUD("Rebooting Raycast...");
+
+  await new Promise((resolve) => setTimeout(resolve, 1500)); // 1.5 seconds
+
+  const subprocess = spawn(
+    "/bin/bash",
+    [
+      "-c",
+      `
+      sleep 0.5;
+      open -a "Raycast"
+      `,
+    ],
+    {
+      detached: true,
+      stdio: "ignore",
+    },
+  );
+
+  subprocess.unref();
+
+  spawn("killall", ["Raycast"]);
greptile
logic: Killing Raycast after spawning the relaunch subprocess could leave orphaned processes if kill fails. Consider reversing the order: kill first, then spawn relaunch.
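One way to get the kill-first ordering the comment asks for, shown here as a rough sketch rather than the extension's actual fix, is to run both steps inside a single detached shell so the relaunch no longer depends on the extension process surviving the kill:

import { spawn } from "child_process";

// Kill Raycast and relaunch it from one detached shell: the kill happens first,
// and the relaunch still runs even though the extension process dies with Raycast.
const subprocess = spawn(
  "/bin/bash",
  ["-c", `killall Raycast; sleep 0.5; open -a "Raycast"`],
  {
    detached: true,
    stdio: "ignore",
  },
);

subprocess.unref();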
diff block
+name: Web App E2E Testing
+
+on:
+  pull_request:
+    branches:
+      - main
+    paths:
+      - 'web/**'
+      - '.github/workflows/web_testing.yml' # Also run if the workflow file itself changes
+
+jobs:
+  test:
+    runs-on: blacksmith-16vcpu-ubuntu-2204 # Using a powerful runner as requested
+
+    # Service container for Redis
+    services:
+      redis:
+        image: redis
+        ports:
+          - 6379:6379
+        options: >-
+          --health-cmd "redis-cli ping"
+          --health-interval 10s
+          --health-timeout 5s
+          --health-retries 5
+
+    steps:
+      - name: Checkout code
+        uses: actions/checkout@v4
+
+      - name: Set up Node.js # Assuming frontend tests use Node
+        uses: actions/setup-node@v4
+        with:
+          node-version: '20' # Specify your Node version
+
+      - name: Install Supabase CLI
+        run: npm install --global supabase@latest
+
+      - name: Install Rust
+        uses: actions-rs/toolchain@v1
+        with:
+          toolchain: stable
+          profile: minimal
+          override: true
+
+      - name: Cache Rust dependencies
+        uses: Swatinem/rust-cache@v2
+
+      - name: Install Diesel CLI
+        run: cargo install diesel_cli --no-default-features --features postgres
+
+      - name: Start Supabase
+        id: supabase_start
+        # Supabase start needs Docker
+        # Run in background, pipe output to file, then process file
+        run: |
+          supabase start &> supabase_output.log &
+          echo "Waiting for Supabase services to initialize..."
+          sleep 30 # Initial wait time, adjust as needed
+
+          # Wait for DB to be connectable - adjust port if supabase start uses a different default
+          n=0
+          until [ "$n" -ge 30 ] || pg_isready -h 127.0.0.1 -p 54322 -U postgres; do
+            n=$((n+1))
+            echo "Waiting for DB... Attempt $n/30"
+            sleep 2
+          done
+          if ! pg_isready -h 127.0.0.1 -p 54322 -U postgres; then
+            echo "::error::Supabase DB did not become ready in time."
+            cat supabase_output.log
+            exit 1
+          fi
+
+          echo "Supabase services seem ready. Extracting config..."
+          cat supabase_output.log
+
+          # Extract variables from supabase start output
+          # These grep patterns might need adjustment based on actual supabase cli output format
+          echo "DB_URL=$(grep 'DB URL:' supabase_output.log | sed 's/.*DB URL: *//')" >> $GITHUB_ENV
+          echo "SUPABASE_URL=$(grep 'API URL:' supabase_output.log | sed 's/.*API URL: *//')" >> $GITHUB_ENV
+          echo "SUPABASE_ANON_KEY=$(grep 'anon key:' supabase_output.log | sed 's/.*anon key: *//')" >> $GITHUB_ENV
+          echo "SUPABASE_SERVICE_ROLE_KEY=$(grep 'service_role key:' supabase_output.log | sed 's/.*service_role key: *//')" >> $GITHUB_ENV
+          echo "JWT_SECRET=$(grep 'JWT secret:' supabase_output.log | sed 's/.*JWT secret: *//')" >> $GITHUB_ENV
+
+          # Check if variables were extracted
+          if [ -z "${DB_URL}" ] || [ -z "${SUPABASE_URL}" ] || [ -z "${SUPABASE_ANON_KEY}" ] || [ -z "${SUPABASE_SERVICE_ROLE_KEY}" ] || [ -z "${JWT_SECRET}" ]; then
+            echo "::error::Failed to extract Supabase configuration from output."
+            cat supabase_output.log
+            exit 1
+          fi
+
+          echo "Supabase started and configured."
+
+      - name: Run Migrations
+        working-directory: ./api
+        run: diesel migration run
+        env:
+          # Use the DB URL extracted from supabase start
+          DATABASE_URL: ${{ env.DB_URL }}
+
+      - name: Seed Database
+        run: |
+          # Extract connection details from DB_URL (format: postgres://USER:PASS@HOST:PORT/DBNAME)
+          PGUSER=$(echo "${{ env.DB_URL }}" | awk -F '[/:]' '{print $4}')
+          PGPASSWORD=$(echo "${{ env.DB_URL }}" | awk -F '[:@]' '{print $3}')
+          PGHOST=$(echo "${{ env.DB_URL }}" | awk -F '[@:]' '{print $4}')
+          PGPORT=$(echo "${{ env.DB_URL }}" | awk -F '[:/]' '{print $6}')
+          PGDATABASE=$(echo "${{ env.DB_URL }}" | awk -F '/' '{print $NF}')
+
+          PGPASSWORD=$PGPASSWORD psql -h $PGHOST -p $PGPORT -U $PGUSER -d $PGDATABASE -f ./api/libs/database/seed.sql
+        env:
+          DATABASE_URL: ${{ env.DB_URL }}
+
+      - name: Build API Server
+        working-directory: ./api
+        run: cargo build --release # Build release for potentially faster execution
+
+      - name: Run API Server
+        working-directory: ./api
+        run: ./target/release/server & # Run the built binary in the background
greptile
style: No process ID capture or cleanup for background server process. This could leave orphaned processes
suggested fix
run: |
+ ./target/release/server &
+ echo "API_PID=$!" >> $GITHUB_ENV # Store PID for later cleanup
diff block
+import { LocalStorage } from "@raycast/api";
+import { ShareSession, StoredSession } from "./types";
+import { exec } from "child_process";
+import { promisify } from "util";
+
+const STORAGE_KEY = "sendme-sessions";
+const execAsync = promisify(exec);
+
+export const globalSessions = {
+  sessions: [] as ShareSession[],
+  listeners: new Set<() => void>(),
+
+  addSession(session: ShareSession) {
+    if (this.sessions.some((s) => s.id === session.id)) return;
+    this.sessions.push(session);
+    this.notifyListeners();
+  },
+
+  removeSession(id: string) {
+    this.sessions = this.sessions.filter((s) => s.id !== id);
+    this.notifyListeners();
+  },
+
+  getSessions() {
+    return [...this.sessions];
+  },
+
+  subscribe(listener: () => void) {
+    this.listeners.add(listener);
+    return () => {
+      this.listeners.delete(listener);
+    };
+  },
+
+  notifyListeners() {
+    this.listeners.forEach((listener) => listener());
+  },
+
+  getStorableSessions(): StoredSession[] {
+    return this.sessions
+      .filter((s) => s.pid !== undefined)
+      .map((s) => ({
+        id: s.id,
+        pid: s.pid as number,
+        ticket: s.ticket,
+        filePath: s.filePath,
+        fileName: s.fileName,
+        startTime: s.startTime.toISOString(),
+      }));
+  },
+
+  async persistSessions() {
+    try {
+      const sessions = this.getStorableSessions();
+      await LocalStorage.setItem(STORAGE_KEY, JSON.stringify(sessions));
+    } catch (error) {
+      console.error("Failed to persist sessions:", error);
+    }
+  },
+
+  async loadSessions() {
+    try {
+      const stored = await LocalStorage.getItem<string>(STORAGE_KEY);
+      if (stored) {
+        const sessions = JSON.parse(stored) as StoredSession[];
+        sessions.forEach((s) => {
+          const session: ShareSession = {
+            ...s,
+            process: null,
+            startTime: new Date(s.startTime),
+            isDetached: true,
+          };
+          this.addSession(session);
+        });
+      }
+    } catch (error) {
+      console.error("Failed to load sessions:", error);
+    }
+  },
+
+  async stopSession(id: string) {
+    const session = this.sessions.find((s) => s.id === id);
+    if (!session) return;
+
+    try {
+      if (session.isDetached && session.pid) {
+        try {
+          process.kill(session.pid);
+        } catch (e) {
+          await execAsync(`kill -9 ${session.pid}`);
+        }
+      } else if (session.process?.pid) {
+        try {
+          process.kill(-session.process.pid);
+        } catch (e) {
+          session.process.kill();
+        }
+      }
+
+      this.removeSession(id);
+      await this.persistSessions();
+    } catch (error) {
+      console.error("Error stopping session:", error);
+      throw error;
+    }
+  },
+
+  async stopAllSessions(): Promise<void> {
+    const sessionsToStop = [...this.sessions];
+
+    for (const session of sessionsToStop) {
+      try {
+        await this.stopSession(session.id);
+      } catch (error) {
+        console.error(`Error stopping session ${session.id}:`, error);
+      }
+    }
+
+    this.sessions = [];
+    this.notifyListeners();
+    await this.persistSessions();
greptile
logic: Sessions array cleared after stopAllSessions may leave orphaned processes if some stops failed
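A possible shape for the fix, sketched here as an assumption rather than the extension's actual code: keep any session whose stop call failed instead of clearing the whole array, so failed stops stay visible and can be retried.

async stopAllSessions(): Promise<void> {
  const sessionsToStop = [...this.sessions];
  const failed: ShareSession[] = [];

  for (const session of sessionsToStop) {
    try {
      await this.stopSession(session.id); // removes the session itself on success
    } catch (error) {
      console.error(`Error stopping session ${session.id}:`, error);
      failed.push(session); // remember it instead of silently dropping it
    }
  }

  // Keep only the sessions that could not be stopped rather than clearing everything.
  this.sessions = failed;
  this.notifyListeners();
  await this.persistSessions();
},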