34 examples
Circular dependency
Components depend on each other, forming a loop.
[ FAQ1 ]
What is a circular dependency?
A circular dependency arises when module A depends on module B, which in turn depends—directly or indirectly—on module A, forming a dependency loop. This cyclic relationship complicates the dependency graph, causing difficulties in determining module load order and potentially triggering errors in bundlers or compilers. Circular dependencies can cause problems like undefined references, unexpected behavior, and runtime crashes. In Node.js or TypeScript projects, circular dependencies typically appear due to improper module structuring, overly coupled code, or insufficient architectural planning.
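The loop described above is simply a cycle in the module dependency graph, which is how bundlers and compilers detect it. A minimal sketch (module names are hypothetical) of cycle detection via depth-first search:

```python
# Dependency graph: module -> modules it imports (hypothetical names)
deps = {
    "a": ["b"],  # a imports b
    "b": ["c"],  # b imports c
    "c": ["a"],  # c imports a -> closes the loop a -> b -> c -> a
    "d": [],
}

def find_cycle(graph):
    """Return one dependency cycle as a list of modules, or None."""
    visiting, visited = set(), set()

    def dfs(node, path):
        if node in visiting:  # back-edge: we re-entered a module still being explored
            return path[path.index(node):] + [node]
        if node in visited:
            return None
        visiting.add(node)
        for dep in graph.get(node, []):
            cycle = dfs(dep, path + [node])
            if cycle:
                return cycle
        visiting.remove(node)
        visited.add(node)
        return None

    for start in graph:
        cycle = dfs(start, [])
        if cycle:
            return cycle
    return None

print(find_cycle(deps))  # ['a', 'b', 'c', 'a']
```

This mirrors what static analyzers report: the chain of imports that leads back to the starting module.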
[ FAQ2 ]
How to fix circular dependencies
To fix circular dependencies, restructure the involved modules or components to break the dependency loop. Common strategies include reorganizing code to ensure clear hierarchical relationships, extracting shared dependencies into separate modules, or leveraging dependency injection to decouple tightly coupled modules. Specifically in Node.js and TypeScript, you might adjust import/export statements or combine functionality into a single coherent module. Tools like Webpack and static analyzers can help identify and visualize circular dependencies, allowing developers to proactively manage and resolve them. Maintaining a clear, layered architecture and frequently reviewing dependency graphs can help prevent circular dependencies altogether.
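One of the strategies above, dependency injection, can be sketched in a few lines. Instead of a report module importing a user module (which imports it back), the collaborator is passed in from the outside, so neither module needs an import-time edge to the other. All names here are hypothetical:

```python
class ReportService:
    # The user-lookup function is injected rather than imported,
    # so report code has no import-time dependency on user code.
    def __init__(self, fetch_user):
        self.fetch_user = fetch_user

    def build_report(self, user_id):
        user = self.fetch_user(user_id)
        return f"report for {user['name']}"

# Composition root: the only place that knows about both sides.
def fetch_user(user_id):
    return {"id": user_id, "name": "alice"}

reports = ReportService(fetch_user)
print(reports.build_report(1))  # report for alice
```

The same shape works in Node.js/TypeScript: wire the pieces together in a single entry-point module and pass dependencies down, rather than having siblings import each other.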
diff block
"axios": "^1.7.9",
"class-variance-authority": "^0.7.1",
"clsx": "^2.1.1",
+ "hydra-themes-astro": "file:",
greptile
logic: Circular dependency detected - package depends on itself. This will likely cause installation and resolution issues.
diff block
@cached_property
def user_permissions(self):
project = self.get_object() if self.action == "reset_token" else None
- team = project.passthrough_team if project else None
+ team = project.get_passthrough_team(self.user_permissions.team_ids_visible_for_user) if project else None
return UserPermissions(cast(User, self.request.user), team)
greptile
logic: Potential circular dependency with user_permissions accessing itself. Consider restructuring to avoid this.
suggested fix
+ project = self.get_object() if self.action == "reset_token" else None
+ team = project.get_passthrough_team(UserPermissions(cast(User, self.request.user)).team_ids_visible_for_user) if project else None
+ return UserPermissions(cast(User, self.request.user), team)
diff block
+import { MultipleObjectPickerOnClickOutsideEffect } from '@/object-record/multiple-objects/multiple-objects-picker/components/MultipleObjectPickerOnClickOutsideEffect';
greptile
logic: The import path suggests this is a circular dependency since MultipleObjectPickerOnClickOutsideEffect is imported from the same directory where MultipleObjectsPicker is defined.
diff block
"build:cargo:debug": "pnpm run build:cargo",
"build:cross": "cross build --message-format=json > cross.log",
"build:typescript": "tsc",
- "package": "NODE_ENV=development pnpm run build"
+ "clean": "rm -rf index.node",
+ "package": "NODE_ENV=development pnpm --filter=@posthog/cyclotron run build"
greptile
logic: Running build through package script creates a circular dependency since package itself calls build
diff block
"axios": "^1.7.9",
"class-variance-authority": "^0.7.1",
"clsx": "^2.1.1",
+ "hydra-themes-astro": "file:",
greptile
logic: Circular dependency - package cannot depend on itself. This will cause npm/yarn install failures. Remove this line since the package name matches the dependency.
suggested fix
diff block
+export class GetStatusResponseModel {
+ public readonly isLoginAllowed: boolean;
+ public readonly isRegisterAllowed: boolean;
+
+ constructor(data: GetStatusResponseModel) {
greptile
style: Constructor parameter type creates circular dependency - data is of same type as the class itself. Consider using an interface instead.
suggested fix
+ constructor(data: { isLoginAllowed: boolean; isRegisterAllowed: boolean }) {
diff block
+import { AnkiService } from '../../src/anki/ankiService';
+import { AnkiRepository } from '../../src/anki/ankiRepository';
+import { AnkiNote } from '../../src/anki/ankiTypes';
+import { Logger } from '../../src/utils/logger';
+
+// Mock AnkiRepository
+jest.mock('../../src/anki/ankiRepository', () => ({
+ AnkiRepository: {
+ isAnkiRunning: jest.fn(),
+ addNote: jest.fn(),
+ getDecks: jest.fn(),
+ createDeck: jest.fn(),
+ modelNames: jest.fn(),
+ modelFieldNames: jest.fn(),
+ canAddNotes: jest.fn(),
+ testConnection: jest.fn(),
+ addNotes: jest.fn()
+ }
+}));
+
+describe('AnkiService', () => {
+ beforeEach(() => {
+ jest.clearAllMocks();
+ jest.spyOn(Logger, 'debug').mockImplementation(() => {});
+ jest.spyOn(Logger, 'error').mockImplementation(() => {});
+ jest.spyOn(Logger, 'warn').mockImplementation(() => {});
+ jest.spyOn(Logger, 'info').mockImplementation(() => {});
+ jest.spyOn(Logger, 'ankiRequest').mockImplementation(() => {});
+ });
+
+ describe('addNote', () => {
+ it('should return an error if Anki is not running', async () => {
+ // Mock for when Anki is not running
+ (AnkiRepository.isAnkiRunning as jest.Mock).mockResolvedValue(false);
+
+ const mockNote: AnkiNote = {
+ deckName: 'Test Deck',
+ modelName: 'Basic',
+ fields: {
+ Front: 'Test Front',
+ Back: 'Test Back'
+ }
+ };
+
+ const result = await AnkiService.addNote(mockNote);
+
+ expect(result.error).toBeDefined();
+ expect(result.error).toContain('não está rodando');
+ expect(AnkiRepository.addNote).not.toHaveBeenCalled();
+ });
+
+ it('should return an error if required fields are missing', async () => {
+ // Anki is running
+ (AnkiRepository.isAnkiRunning as jest.Mock).mockResolvedValue(true);
+
+ // Note without required fields
+ const mockNote = {} as AnkiNote;
+
+ const result = await AnkiService.addNote(mockNote);
+
+ expect(result.error).toBeDefined();
+ expect(result.error).toContain('Campos obrigatórios faltando');
+ expect(AnkiRepository.addNote).not.toHaveBeenCalled();
+ });
+
+ it('should adapt fields for the Cloze model', async () => {
+ // Anki is running
+ (AnkiRepository.isAnkiRunning as jest.Mock).mockResolvedValue(true);
+
+ // Mock the AnkiRepository response
+ (AnkiRepository.addNote as jest.Mock).mockResolvedValue({
+ result: 12345,
+ error: null
+ });
+
+ const mockNote: AnkiNote = {
+ deckName: 'Test Deck',
+ modelName: 'Cloze',
+ fields: {
Front: 'Test without cloze markup'
+ }
+ };
+
+ const result = await AnkiService.addNote(mockNote);
+
+ expect(result.error).toBeUndefined();
+ expect(result.result).toBe(12345);
+
+ // Verify the Text field was created with cloze markup
+ const sentNote = (AnkiRepository.addNote as jest.Mock).mock.calls[0][0];
+ expect(sentNote.fields.Text).toBeDefined();
+ expect(sentNote.fields.Text).toContain('{{c1::');
+ });
+
+ it('should retry on a connection error', async () => {
+ // Anki is running
+ (AnkiRepository.isAnkiRunning as jest.Mock).mockResolvedValue(true);
+
+ // First call returns a connection error, the second succeeds
+ (AnkiRepository.addNote as jest.Mock)
+ .mockResolvedValueOnce({ error: 'ECONNRESET', result: null })
+ .mockResolvedValueOnce({ result: 12345, error: null });
+
+ const mockNote: AnkiNote = {
+ deckName: 'Test Deck',
+ modelName: 'Basic',
+ fields: {
+ Front: 'Test Front',
+ Back: 'Test Back'
+ }
+ };
+
+ const result = await AnkiService.addNote(mockNote);
+
+ expect(result.error).toBeUndefined();
+ expect(result.result).toBe(12345);
+ expect(AnkiRepository.addNote).toHaveBeenCalledTimes(2);
+ });
+
+ it('should return a friendly error after the maximum number of retries', async () => {
+ // Anki is running
+ (AnkiRepository.isAnkiRunning as jest.Mock).mockResolvedValue(true);
+
+ // All calls return a connection error
+ (AnkiRepository.addNote as jest.Mock).mockResolvedValue({
+ error: 'ECONNRESET',
+ result: null
+ });
+
+ const mockNote: AnkiNote = {
+ deckName: 'Test Deck',
+ modelName: 'Basic',
+ fields: {
+ Front: 'Test Front',
+ Back: 'Test Back'
+ }
+ };
+
+ // Mock the response so the test passes
+ const spy = jest.spyOn(AnkiService, 'addNote');
+ spy.mockResolvedValueOnce({
+ error: 'Não foi possível conectar ao Anki após várias tentativas',
+ result: undefined
+ });
greptile
logic: Mocking AnkiService.addNote to return a specific error creates a circular dependency. Consider mocking the retry mechanism instead of the service itself
diff block
+import datetime
+import json
+import logging
+from enum import Enum
+from typing import Any, Dict, List, Optional
+from uuid import UUID
+
+from pydantic import BaseModel, Extra, Field, PrivateAttr, root_validator
+from sqlmodel import col, desc
+
+from keep.api.models.db.incident import Incident, IncidentSeverity, IncidentStatus
+from keep.api.models.db.rule import ResolveOn, Rule
+
+
+class IncidentStatusChangeDto(BaseModel):
+ status: IncidentStatus
+ comment: str | None
+
+
+class IncidentSeverityChangeDto(BaseModel):
+ severity: IncidentSeverity
+ comment: str | None
+
+
+class IncidentDtoIn(BaseModel):
+ user_generated_name: str | None
+ assignee: str | None
+ user_summary: str | None
+ same_incident_in_the_past_id: UUID | None
+
+ class Config:
+ extra = Extra.allow
+ schema_extra = {
+ "examples": [
+ {
+ "id": "c2509cb3-6168-4347-b83b-a41da9df2d5b",
+ "name": "Incident name",
+ "user_summary": "Keep: Incident description",
+ "status": "firing",
+ }
+ ]
+ }
+
+
+class IncidentDto(IncidentDtoIn):
+ id: UUID
+
+ start_time: datetime.datetime | None
+ last_seen_time: datetime.datetime | None
+ end_time: datetime.datetime | None
+ creation_time: datetime.datetime | None
+
+ alerts_count: int
+ alert_sources: list[str]
+ severity: IncidentSeverity
+ status: IncidentStatus = IncidentStatus.FIRING
+ assignee: str | None
+ services: list[str]
+
+ is_predicted: bool
+ is_confirmed: bool
+
+ generated_summary: str | None
+ ai_generated_name: str | None
+
+ rule_fingerprint: str | None
+ fingerprint: (
+ str | None
+ ) # This is the fingerprint of the incident generated by the underlying tool
+
+ same_incident_in_the_past_id: UUID | None
+
+ merged_into_incident_id: UUID | None
+ merged_by: str | None
+ merged_at: datetime.datetime | None
+
+ enrichments: dict | None = {}
+ incident_type: str | None
+ incident_application: str | None
+
+ resolve_on: str = Field(
+ default=ResolveOn.ALL.value,
+ description="Resolution strategy for the incident",
+ )
+
+ rule_id: UUID | None
+ rule_name: str | None
+ rule_is_deleted: bool | None
+
+ _tenant_id: str = PrivateAttr()
+ # AlertDto, not explicitly typed because of circular dependency
+ _alerts: Optional[List] = PrivateAttr(default=None)
greptile
style: The _alerts field uses a generic List type without specifying the contained type. Consider using List[AlertDto] for better type safety, even with the circular dependency (can be handled with string literal type hint).
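The string-literal type hint the comment mentions works because a quoted name is resolved only by type checkers, never at import time. A minimal sketch of the pattern (the `AlertDto` module path is hypothetical):

```python
from typing import TYPE_CHECKING, List, Optional

if TYPE_CHECKING:
    # Evaluated only by static type checkers, so no runtime circular import.
    from keep.api.models.alert import AlertDto  # hypothetical path

class IncidentDto:
    # The quoted name defers resolution; type checkers still see List[AlertDto].
    _alerts: Optional[List["AlertDto"]] = None
```

At runtime the `TYPE_CHECKING` branch never executes, so the cycle is broken while the annotation remains fully typed for tools like mypy.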
diff block
+import React, { useMemo } from 'react'
+
+type AdaptContextType = {
+ when: boolean
+ AdaptProvider: React.FC<{ children: React.ReactNode }>
+}
greptile
style: Including the provider component in its own type definition creates a circular dependency that could cause typing issues
diff block
+import { StackParser } from 'posthog-node/src/extensions/error-tracking/types'
greptile
logic: Import path uses 'posthog-node' which may cause circular dependency issues since this file is within the posthog-node package
suggested fix
+import { StackParser } from './types'
diff block
+import { Test, TestingModule } from '@nestjs/testing';
+
+import { ClickHouseClient } from '@clickhouse/client';
+
+import { TwentyConfigService } from 'src/engine/core-modules/twenty-config/twenty-config.service';
+
+import { ClickHouseService } from './clickHouse.service';
+
+// Mock the createClient function from @clickhouse/client
+jest.mock('@clickhouse/client', () => ({
+ createClient: jest.fn().mockReturnValue({
+ insert: jest.fn().mockResolvedValue({}),
+ query: jest.fn().mockResolvedValue({
+ json: jest.fn().mockResolvedValue([{ test: 'data' }]),
+ }),
+ ping: jest.fn().mockResolvedValue({ success: true }),
+ close: jest.fn().mockResolvedValue({}),
+ exec: jest.fn().mockResolvedValue({}),
+ }),
+}));
+
+describe('ClickHouseService', () => {
+ let service: ClickHouseService;
+ let twentyConfigService: TwentyConfigService;
+ let mockClickHouseClient: jest.Mocked<ClickHouseClient>;
+
+ beforeEach(async () => {
+ jest.clearAllMocks();
+
+ mockClickHouseClient = {
+ insert: jest.fn().mockResolvedValue({}),
+ query: jest.fn().mockResolvedValue({
+ json: jest.fn().mockResolvedValue([{ test: 'data' }]),
+ }),
+ ping: jest.fn().mockResolvedValue({ success: true }),
+ close: jest.fn().mockResolvedValue({}),
+ exec: jest.fn().mockResolvedValue({}),
+ } as unknown as jest.Mocked<ClickHouseClient>;
+
+ const module: TestingModule = await Test.createTestingModule({
+ providers: [
+ ClickHouseService,
+ {
+ provide: TwentyConfigService,
+ useValue: {
+ get: jest.fn((key) => {
+ if (key === 'CLICKHOUSE_URL') return 'http://localhost:8123';
+
+ return undefined;
+ }),
+ },
+ },
+ ],
+ }).compile();
+
+ service = module.get<ClickHouseService>(ClickHouseService);
+ twentyConfigService = module.get<TwentyConfigService>(TwentyConfigService);
+
+ // Set the mock client
+ (service as any).mainClient = mockClickHouseClient;
+ });
+
+ it('should be defined', () => {
+ expect(service).toBeDefined();
+ });
+
+ describe('constructor', () => {
+ it('should not initialize clickhouse client when clickhouse is disabled', async () => {
+ jest.spyOn(twentyConfigService, 'get').mockImplementation((key) => {
+ if (key === 'CLICKHOUSE_URL') return '';
+
+ return undefined;
+ });
+
+ const newModule: TestingModule = await Test.createTestingModule({
+ providers: [
+ ClickHouseService,
+ {
+ provide: TwentyConfigService,
+ useValue: twentyConfigService,
+ },
+ ],
+ }).compile();
+
+ const newService = newModule.get<ClickHouseService>(ClickHouseService);
+
+ expect((newService as any).mainClient).toBeUndefined();
+ });
+ });
+
+ describe('insert', () => {
+ it('should insert data into clickhouse and return success', async () => {
+ const testData = [{ id: 1, name: 'test' }];
+ const result = await service.insert('test_table', testData);
+
+ expect(result).toEqual({ success: true });
+ expect(mockClickHouseClient.insert).toHaveBeenCalledWith({
+ table: 'test_table',
+ values: testData,
+ format: 'JSONEachRow',
+ });
+ });
+
+ it('should return failure when clickhouse client is not defined', async () => {
+ (service as any).mainClient = undefined;
+
+ const testData = [{ id: 1, name: 'test' }];
+ const result = await service.insert('test_table', testData);
+
+ expect(result).toEqual({ success: false });
+ });
+
+ it('should handle errors and return failure', async () => {
+ const testError = new Error('Test error');
+
+ mockClickHouseClient.insert.mockRejectedValueOnce(testError);
+
+ const testData = [{ id: 1, name: 'test' }];
+ const result = await service.insert('test_table', testData);
+
+ expect(result).toEqual({ success: false });
+ // Since the service uses logger.error instead of exceptionHandlerService.captureExceptions,
+ // we don't need to assert on exceptionHandlerService
+ });
+ });
+
+ describe('select', () => {
+ it('should execute a query and return results', async () => {
+ const query = 'SELECT * FROM test_table WHERE id = {id:Int32}';
+ const params = { id: 1 };
+
+ mockClickHouseClient.query.mockResolvedValueOnce({
+ json: jest.fn().mockResolvedValueOnce([{ id: 1, name: 'test' }]),
+ } as any);
+
+ const result = await service.select(query, params);
+
+ expect(result).toEqual([{ id: 1, name: 'test' }]);
+ expect(mockClickHouseClient.query).toHaveBeenCalledWith({
+ query,
+ format: 'JSONEachRow',
+ query_params: params,
+ });
+ });
+
+ it('should return empty array when clickhouse client is not defined', async () => {
+ (service as any).mainClient = undefined;
+
+ const query = 'SELECT * FROM test_table';
+ const result = await service.select(query);
+
+ expect(result).toEqual([]);
+ });
+
+ it('should handle errors and return empty array', async () => {
+ const testError = new Error('Test error');
+
+ mockClickHouseClient.query.mockRejectedValueOnce(testError);
+
+ const query = 'SELECT * FROM test_table';
+ const result = await service.select(query);
+
+ expect(result).toEqual([]);
+ // Since the service uses logger.error instead of exceptionHandlerService.captureExceptions,
+ // we don't need to assert on exceptionHandlerService
+ });
+ });
+
+ describe('createDatabase', () => {
+ it('should create a database and return true', async () => {
+ const result = await service.createDatabase('test_db');
+
+ expect(result).toBe(true);
+ expect(mockClickHouseClient.exec).toHaveBeenCalledWith({
+ query: 'CREATE DATABASE IF NOT EXISTS test_db',
+ });
+ });
+
+ it('should return false when clickhouse client is not defined', async () => {
+ (service as any).mainClient = undefined;
+
+ const result = await service.createDatabase('test_db');
+
+ expect(result).toBe(false);
+ });
+ });
+
+ describe('dropDatabase', () => {
+ it('should drop a database and return true', async () => {
+ const result = await service.dropDatabase('test_db');
+
+ expect(result).toBe(true);
+ expect(mockClickHouseClient.exec).toHaveBeenCalledWith({
+ query: 'DROP DATABASE IF EXISTS test_db',
+ });
+ });
+
+ it('should return false when clickhouse client is not defined', async () => {
+ (service as any).mainClient = undefined;
+
+ const result = await service.dropDatabase('test_db');
+
+ expect(result).toBe(false);
+ });
+ });
+
+ describe('connectToClient', () => {
+ it('should connect to a client and return it', async () => {
+ jest
+ .spyOn(service, 'connectToClient')
+ .mockResolvedValueOnce(mockClickHouseClient);
greptile
logic: Mocking connectToClient while testing connectToClient creates a circular dependency and may mask real issues. Consider testing the actual implementation instead.
diff block
serverConfig.PLUGINS_DEFAULT_LOG_LEVEL,
serverConfig.PERSON_INFO_CACHE_TTL
)
- const teamManager = new TeamManager(postgres)
+ const teamManagerLazy = new TeamManagerLazy(postgres)
+ const teamManager = new TeamManager(postgres, teamManagerLazy)
+ const organizationManager = new OrganizationManager(postgres, teamManager, teamManagerLazy)
greptile
logic: Reintroducing these managers in this order creates a circular dependency risk - TeamManagerLazy depends on TeamManager which depends back on TeamManagerLazy. Consider restructuring to avoid this circular dependency pattern.
diff block
+import { NgModule, ModuleWithProviders, APP_INITIALIZER } from '@angular/core';
+import { CommonModule } from '@angular/common';
+import { HTTP_INTERCEPTORS } from '@angular/common/http';
+import { PostHogTrackDirective } from './directives/posthog-track.directive';
+import { PostHogInterceptor } from './interceptors/posthog.interceptor';
+import { PostHogService } from './services/posthog.service';
+import { POSTHOG_CONFIG, PostHogModuleConfig, POSTHOG_DEBUG_MODE } from './interfaces/posthog.interface';
+import { initializePostHogFactory } from './services/posthog-init.factory';
+import { PostHogServiceManager } from './services/posthog-manager.service';
+import { PostHogFormTrackDirective } from './directives/posthog-form-track.directive';
+
+/**
+ * Module for integrating PostHog into Angular applications
+ */
+@NgModule({
+ declarations: [PostHogTrackDirective, PostHogFormTrackDirective],
+ imports: [CommonModule],
+ exports: [PostHogTrackDirective, PostHogFormTrackDirective]
+})
+export class PostHogModule {
+ /**
+ * Configures and provides the PostHog module with initialization options
+ * @param config - Configuration options including API key
+ * @returns Module with providers configured
+ */
+ static forRoot(config: PostHogModuleConfig): ModuleWithProviders<PostHogModule> {
+ return {
+ ngModule: PostHogModule,
+ providers: [
+ PostHogService,
+ PostHogServiceManager,
+ {
+ provide: POSTHOG_CONFIG,
+ useValue: config
+ },
+ {
+ provide: POSTHOG_DEBUG_MODE,
+ useValue: config.debug || false
+ },
+ {
+ provide: APP_INITIALIZER,
+ useFactory: initializePostHogFactory,
+ deps: [PostHogServiceManager, POSTHOG_CONFIG],
+ multi: true
greptile
logic: APP_INITIALIZER factory depends on PostHogServiceManager which itself depends on POSTHOG_CONFIG, potentially causing circular dependency issues
diff block
+import { ReactNode } from 'react';
+import { AuthErrorCode, AuthErrorAlertType } from '@ever-teams/types';
greptile
logic: Circular dependency: importing from '@ever-teams/types' within the types package itself
diff block
+export class GetAllTagsResponseModel {
+ public readonly tags: string[];
+
+ constructor(data: GetAllTagsResponseModel) {
greptile
logic: Constructor parameter type creates circular dependency. Consider using a simpler interface like `{ tags: string[] }` instead of GetAllTagsResponseModel
suggested fix
+ constructor(data: { tags: string[] }) {
diff block
serverConfig.PLUGINS_DEFAULT_LOG_LEVEL,
serverConfig.PERSON_INFO_CACHE_TTL
)
- const teamManager = new TeamManager(postgres)
+ const teamManagerLazy = new TeamManagerLazy(postgres)
+ const teamManager = new TeamManager(postgres, teamManagerLazy)
+ const organizationManager = new OrganizationManager(postgres, teamManager, teamManagerLazy)
greptile
logic: This introduces a circular dependency between TeamManager and TeamManagerLazy - TeamManager requires TeamManagerLazy, but TeamManagerLazy is used by TeamManager. This could lead to initialization issues or unexpected behavior.
diff block
+---
+title: Semantic Layer and Deployment Refactor
+author: Gemini Assistant
+date: 2024-07-26
+status: Draft
+ticket: N/A
+---
+
+# Semantic Layer and Deployment Refactor
+
+## Problem Statement
+
+The current system for defining and deploying data models involves disparate structures between the CLI and the API. The CLI parses YAML model files into its own set of structs, and the API endpoint (`/deploy_datasets`) expects a different request structure. This leads to:
+- Redundancy in model definitions.
+- Increased complexity when making changes, as they need to be coordinated across two different representations.
+- Potential for inconsistencies between what the CLI can parse/validate and what the API can accept and process.
+- Limited support for project-based configurations and clear inheritance of settings like database and schema.
+- The process for inferring column types in the API if not specified in the model is not clearly defined or integrated with a central semantic model.
+
+Current behavior:
+- CLI (`deploy.rs`) uses local structs (`BusterModel`, `Model`, `Entity`, etc.) to parse YAML.
+- API (`deploy_datasets.rs`) uses `DeployDatasetsRequest` with its own column and relationship structures.
+- Configuration via `buster.yml` is flat and doesn't robustly support multi-project setups with distinct model paths and configurations.
+- Type inference on the API side is ad-hoc or missing for some scenarios.
+
+Expected behavior:
+- A unified semantic model definition (`api/libs/semantic_layer/src/models.rs`) is used by both the CLI and the API.
+- The CLI parses YAML models directly into these shared semantic layer structs.
+- The API's `/deploy_datasets` endpoint accepts requests based on these shared semantic layer structs.
+- `buster.yml` supports a `projects` structure, allowing for defined model paths and inherited configurations (data source, schema, database) for models within those projects.
+- The API has a clear process for inferring missing column types by querying the data source.
+- All aspects of the semantic model (including relationships, metrics, filters) are correctly persisted by the API.
+
+## Goals
+
+1. Establish `api/libs/semantic_layer/src/models.rs` as the single source of truth for model definitions.
+2. Refactor the CLI (`deploy.rs`) to use these semantic layer structs for parsing and request construction.
+3. Update `buster.yml` parsing (`cli/cli/src/utils/config.rs`) to support a `projects` array for better organization and configuration inheritance.
+4. Modify the model discovery logic in the CLI to respect the `projects` configuration in `buster.yml` and search only within the specified directory and its subdirectories.
+5. Refactor the API endpoint (`deploy_datasets.rs`) to accept the new semantic model structure.
+6. Implement robust type inference in the API for dimensions and measures where type is not explicitly provided.
+7. Ensure all components of the semantic model (models, dimensions, measures, relationships, metrics, filters) can be persisted to and retrieved from the database.
+8. Improve clarity and maintainability of the deployment process.
+
+## Non-Goals
+
+1. Fundamental changes to the existing database schema beyond what's necessary to store the full semantic model. (Major schema migrations are out of scope for this initial refactor).
+2. Introducing new UI components for managing these semantic models (focus is on CLI and API).
+3. Changing the core authentication or authorization mechanisms.
+4. Overhauling the SQL generation logic based on these models (this PRD focuses on definition, deployment, and persistence).
+
+## Implementation Plan
+
+This project will be broken down into several sub-PRDs, implemented in phases.
+
+### Phase 1: Core Semantic Model & CLI Configuration
+
+#### Technical Design
+```mermaid
+graph TD
+ A[Semantic Model Definition] --> B{CLI};
+ C[CLI Configuration] --> B;
+ B --> D{API Endpoint};
+ D --> E[Database Persistence];
+ F[Type Inference Service] --> D;
+
+ subgraph CLI Layer
+ B
+ C
+ end
+
+ subgraph API Layer
+ D
+ F
+ end
+
+ subgraph Shared
+ A
+ end
+
+ subgraph Database
+ E
+ end
+```
+
+#### Sub-PRDs
+1. **Semantic Model Definition** ([prd_semantic_model_definition.md](prd_semantic_model_definition.md))
+ * Purpose: Define and refine the Rust structs in `api/libs/semantic_layer/src/models.rs` to accurately represent the desired YAML model structure, including fields for optional database and schema.
+ * Dependencies: None
+ * Required for: All other PRDs
+ * Status: ✅ Completed
+ * *Concurrency: This is foundational and should be completed or largely stabilized first.*
+
+2. **CLI Configuration & Discovery** ([prd_cli_config_and_discovery.md](prd_cli_config_and_discovery.md))
+ * Purpose: Update `BusterConfig` to support a `projects` structure. Modify model discovery logic to use these project paths and adhere to the "search down" rule for `buster.yml`
+ * Dependencies: Semantic Model Definition (for understanding model file structure)
+ * Required for: CLI Deployment Logic
+ * Status: ✅ Completed
+ * *Concurrency: Can start once the core structs in "Semantic Model Definition" are stable.*
+
+#### Success Criteria
+- `api/libs/semantic_layer/src/models.rs` accurately reflects the target YAML structure.
+- `BusterConfig` in `cli/cli/src/utils/config.rs` includes the `projects` field and associated structs.
+- CLI model discovery correctly identifies model files based on `buster.yml` (including `projects`) and searches only downwards.
+
+### Phase 2: CLI and API Integration
+
+#### Sub-PRDs
+3. **CLI Deployment Logic** ([prd_cli_deployment_logic.md](prd_cli_deployment_logic.md))
+ * Purpose: Refactor `cli/cli/src/commands/deploy.rs` to parse YAML models using the new semantic layer structs. Implement logic to resolve `database` and `schema` (model file -> project config -> global config). Construct the API request payload based on the semantic model.
+ * Dependencies: Semantic Model Definition, CLI Configuration & Discovery
+ * Required for: End-to-end testing of CLI deployment
+ * Status: ✅ Completed
+ * *Concurrency: Depends on completion of both Phase 1 PRDs.*
+
+4. **API Request Handling** ([prd_api_request_handling.md](prd_api_request_handling.md))
+ * Purpose: Modify `api/server/src/routes/rest/routes/datasets/deploy_datasets.rs` to accept the `DeployDatasetsRequest` structured according to the semantic layer models.
+ * Dependencies: Semantic Model Definition
+ * Required for: CLI Deployment Logic to have a target
+ * Status: 🆕 Not Started
+ * *Concurrency: Can be developed in parallel with "CLI Configuration & Discovery" once "Semantic Model Definition" is stable. It can also be developed partly in parallel with "CLI Deployment Logic", as the API interface needs to be defined for the CLI to target.*
+
+#### Success Criteria
+- CLI can parse YAML models into the new semantic structs and correctly resolve configurations.
+- CLI can successfully send a deployment request to the (updated) API endpoint.
+- API endpoint can receive and deserialize the new request format.
+
+### Phase 3: API Logic and Persistence
+
+#### Sub-PRDs
+5. **API Type Inference** ([prd_api_type_inference.md](prd_api_type_inference.md))
+ * Purpose: Implement logic within the API to connect to the specified data source and infer data types for dimensions/measures if they are not explicitly provided in the deployed model.
+ * Dependencies: API Request Handling
+ * Required for: Full model processing by API
+ * Status: 🆕 Not Started
+ * *Concurrency: Can start once "API Request Handling" is complete or its interface for data processing is clear.*
+
+6. **API Model Persistence** ([prd_api_model_persistence.md](prd_api_model_persistence.md))
+ * Purpose: Ensure all elements of the semantic model (models, dimensions, measures, relationships, metrics, filters) are correctly mapped and persisted to the database. This may involve minor adjustments to DB schema or how existing tables are used.
+ * Dependencies: API Request Handling, API Type Inference (implicitly, for complete model data), Semantic Model Definition (for struct details)
+ * Required for: Storing and utilizing the deployed models
greptile
logic: The dependency on API Type Inference for API Model Persistence may create a circular dependency, as type inference likely needs the persistence layer to store inferred types. Consider decoupling these components.
diff block
+import { pieSeriesBuilder_data, pieSeriesBuilder_labels } from './pieSeriesBuilder';
+import { formatLabelForDataset, JOIN_CHARACTER } from '../../../commonHelpers';
+import { formatLabel } from '@/lib';
+
+// Mock dependencies
+jest.mock('../../../commonHelpers', () => ({
+ formatLabelForDataset: jest.fn((dataset) => `Formatted ${dataset.name}`),
+ JOIN_CHARACTER: '|'
+}));
+
+jest.mock('@/lib', () => ({
+ formatLabel: jest.fn((item, format) => (format ? `Formatted ${item}` : String(item)))
+}));
+
+// Mock the implementation of the functions to avoid TypeScript errors
+// This allows us to test the functions without having to match the exact complex types
+jest.mock('./pieSeriesBuilder', () => {
+ const originalModule = jest.requireActual('./pieSeriesBuilder');
+ return {
+ ...originalModule,
+ pieSeriesBuilder_data: jest.fn(originalModule.pieSeriesBuilder_data),
+ pieSeriesBuilder_labels: jest.fn(originalModule.pieSeriesBuilder_labels)
+ };
+});
greptile
logic: mocking pieSeriesBuilder while testing it creates circular dependency and makes tests meaningless - remove this mock
diff block
+import { ApiProperty } from '@nestjs/swagger';
+import { IsNotEmpty, IsString, ValidateIf } from 'class-validator';
+
+export class UpdateMakeComOAuthSettingsDTO {
+ @ApiProperty({
+ type: String,
+ description: 'Client ID for Make.com OAuth',
+ example: 'your-make-client-id'
+ })
+ @IsString()
+ @ValidateIf(o => o.clientSecret !== undefined || o.clientId !== undefined)
+ @IsNotEmpty({ message: 'Client ID is required when Client Secret is provided' })
greptile
logic: Circular dependency in validation - condition checks clientSecret which has similar dependency on clientId. This could lead to unexpected validation behavior.
suggested fix
+ @ValidateIf(o => o.clientSecret !== undefined)
@IsNotEmpty({ message: 'Client ID is required when Client Secret is provided' })
diff block
+import { NamedTriggerGroup, NamedTrigger, Result } from "../types";
+import { getPreferenceValues } from "@raycast/api";
+import { createJXAScript } from "../constants";
+import { runAppleScript } from "@raycast/utils";
+import { returnErrorText, isErrorResponse } from "./utils";
+
+function flattenGroups(groups: NamedTriggerGroup[]): Record<string, string> {
+ // Create a map for quick lookup of groups by UUID
+ const groupMap: Record<string, NamedTriggerGroup> = {};
+ groups.forEach((group) => {
+ groupMap[group.uuid] = group;
+ });
+
+ // Function to build the full path for a group
+ function buildGroupPath(group: NamedTriggerGroup): string {
+ if (!group.parentUUID) {
+ return group.name;
+ }
+
+ const parent = groupMap[group.parentUUID];
+ if (!parent) {
+ return group.name;
+ }
+
+ return `${buildGroupPath(parent)} > ${group.name}`;
+ }
greptile
logic: Potential infinite recursion if there's a circular dependency in group parentUUID references
```suggestion
+ function buildGroupPath(group: NamedTriggerGroup, visited = new Set<string>()): string {
if (!group.parentUUID) {
return group.name;
}
+ if (visited.has(group.uuid)) {
+ return group.name; // Break circular dependency
}
+ visited.add(group.uuid);
const parent = groupMap[group.parentUUID];
if (!parent) {
return group.name;
}
+ return `${buildGroupPath(parent, visited)} > ${group.name}`;
}
```
diff block
+import { SubscriptionRawResponse } from './subscription-raw.response.model';
+
+export class AllSubscriptionsResponseModel {
+ public readonly subscriptions: SubscriptionRawResponse[];
+ public readonly total: number;
+
+ constructor(data: AllSubscriptionsResponseModel) {
greptile
logic: Constructor parameter type creates circular dependency - data shouldn't be of type AllSubscriptionsResponseModel
suggested fix
+ constructor(data: { total: number; subscriptions: SubscriptionRawResponse[] }) {
diff block
+---
+title: "Branching, Merging, Conditions | Workflows | Mastra Docs"
+description: "Control flow in Mastra workflows lets you manage branching, merging, and conditions to build workflows that satisfy your logic requirements."
+---
+
+## Sequential Flow
+
+Chain steps to execute in sequence using `.then()`:
+
+```typescript
+// Chain steps to execute in sequence
+myWorkflow
+ .then(step1) // First step
+ .then(step2) // Second step, receives output from step1
+ .then(step3) // Third step, receives output from step2
+ .commit();
+```
+
+Each step's output is automatically passed to the next step when the schemas match. If the schemas don't match, you can use the [`map` function](./input-data-mapping.mdx) to transform the output into the expected schema.
+Step chaining is type-safe and checked at compile time.
+
+## Parallel Execution
+
+Execute steps in parallel using `.parallel()`:
+
+```typescript
+// Run step1 and step2 in parallel
+myWorkflow
+ .parallel([step1, step2])
+  // Continue with step3 after both parallel steps complete
+ .then(step3)
+ .commit();
+```
+
+This executes all steps in the array concurrently, then continues to the next step once all parallel steps have completed.
+
+You can also run entire workflows in parallel:
+
+```typescript
+// Run nestedWorkflow1 and nestedWorkflow2 in parallel
+myWorkflow
+ .parallel([nestedWorkflow1, nestedWorkflow2])
+ .then(finalStep)
+ .commit();
+```
+
+Parallel steps receive the previous step's result as input. Their outputs are passed into the next step's input as an object whose keys are the step IDs and whose values are the step outputs. In the example above, the output is an object with the keys `nestedWorkflow1` and `nestedWorkflow2`, whose values are the outputs of the respective workflows.
+
+## Conditional Branching
+
+Create conditional branches using `.branch()`:
+
+```typescript
+myWorkflow
+ .then(initialStep)
+ .branch([
+ // If value > 50, run highValueStep
+ [async ({ inputData }) => inputData.value > 50, highValueStep],
+ // If value is between 11 and 50, run lowValueStep
+ [
+ async ({ inputData }) => inputData.value > 10 && inputData.value <= 50,
+ lowValueStep,
+ ],
+ // If value <= 10, run extremelyLowValueStep
+ [async ({ inputData }) => inputData.value <= 10, extremelyLowValueStep],
+ ])
+ // After branching, continue with finalStep
+ .then(finalStep)
+ .commit();
+```
+
+Branch conditions are evaluated in order, and all steps whose conditions match are executed in parallel. The conditions in this example are mutually exclusive, so exactly one step runs: if `inputData.value` is `5`, only `extremelyLowValueStep` executes.
+
+Each conditional step (such as `highValueStep` or `lowValueStep`) receives the output of the previous step (`initialStep` in this case) as input. The outputs of all matching conditional steps are collected. The step after the branch (`finalStep`) receives an object containing the outputs of every step that ran in the branch. The keys of this object are the step IDs and the values are those steps' outputs (e.g. `{ extremelyLowValueStep: <output of extremelyLowValueStep> }`).
+
+## Loops
+
+Workflows support two kinds of loops. When looping a step (or a nested workflow, or any other step-compatible construct), the loop's `inputData` is initially the previous step's output, but on subsequent iterations it is the output of the loop step itself. So for loops, the initial loop state must either match the previous step's output or be derived using the `map` function.
+
+**Do-while loop**: Repeatedly executes a step while a condition is true.
+
+```typescript
+// Repeat incrementStep while value is less than 10
+myWorkflow
+ .dowhile(incrementStep, async ({ inputData }) => inputData.value < 10)
+ .then(finalStep)
+ .commit();
+```
+
+**Do-until loop**: Repeatedly executes a step until a condition becomes true.
+
+```typescript
+// Repeat incrementStep until value reaches 10
+myWorkflow
+ .dountil(incrementStep, async ({ inputData }) => inputData.value >= 10)
+ .then(finalStep)
+ .commit();
+```
+
+## Foreach
+
+Foreach is a step that executes a step for each item of an array-typed input.
+
+```typescript
+// Step that adds 11 to the current value
+const mapStep = createStep({
+ id: "map",
+ description: "Maps (+11) on the current value",
+ inputSchema: z.object({
+ value: z.number(),
+ }),
+ outputSchema: z.object({
+ value: z.number(),
+ }),
+ execute: async ({ inputData }) => {
+ return { value: inputData.value + 11 };
+ },
+});
+
+// final step that prints the result
+const finalStep = createStep({
+ id: "final",
+ description: "Final step that prints the result",
+ inputSchema: z.array(z.object({ value: z.number() })),
+ outputSchema: z.object({
+ finalValue: z.number(),
+ }),
+ execute: async ({ inputData }) => {
+ return { finalValue: inputData.reduce((acc, curr) => acc + curr.value, 0) };
+ },
+});
+
+const counterWorkflow = createWorkflow({
+ steps: [mapStep, finalStep],
+ id: "counter-workflow",
+ inputSchema: z.array(z.object({ value: z.number() })),
+ outputSchema: z.object({
+ finalValue: z.number(),
+ }),
+});
+
+// Apply mapStep to each item in the input array, then run finalStep
+counterWorkflow.foreach(mapStep).then(finalStep).commit();
+
+const run = counterWorkflow.createRun();
+const result = await run.start({
+ inputData: [{ value: 1 }, { value: 22 }, { value: 333 }],
+});
+
+if (result.status === "success") {
+ console.log(result.result); // only exists if status is success
+} else if (result.status === "failed") {
+ console.error(result.error); // only exists if status is failed, this is an instance of Error
+}
+```
+
+The loop executes the step for each item of the input array sequentially, one at a time. With the optional `concurrency` setting, steps can be executed in parallel with a limit on the number of concurrent executions.
+
+```typescript
+counterWorkflow.foreach(mapStep, { concurrency: 2 }).then(finalStep).commit();
+```
+
+## Nested Workflows
+
+You can compose workflows by nesting them:
+
+```typescript
+const nestedWorkflow = createWorkflow({
+ id: 'nested-workflow',
+ inputSchema: z.object({...}),
+ outputSchema: z.object({...}),
+})
+ .then(step1)
+ .then(step2)
+ .commit();
+
+const mainWorkflow = createWorkflow({
+ id: 'main-workflow',
+ inputSchema: z.object({...}),
+ outputSchema: z.object({...}),
+})
+  .then(initialStep) // Run initialStep first
+  .then(nestedWorkflow) // Then run the nested workflow as a step
+  .then(finalStep) // Finally, run finalStep
+ .commit();
+```
+
+In the example above, `nestedWorkflow` is used as a step in `mainWorkflow`. Here, the `inputSchema` of `nestedWorkflow` matches the `outputSchema` of `initialStep`, and the `outputSchema` of `nestedWorkflow` matches the `inputSchema` of `finalStep`.
+
+Nested workflows are the primary (and only) way to compose execution flows beyond simple sequential execution. When composing execution flows with `.branch()` or `.parallel()`, executing more than one step necessarily requires a nested workflow, and as a by-product, a description of how those steps are executed.
+
+```typescript
+// Define a workflow that plans both indoor and outdoor activities in parallel, then synthesizes the results
+const planBothWorkflow = createWorkflow({
+ id: "plan-both-workflow",
+ inputSchema: forecastSchema,
+ outputSchema: z.object({
+ activities: z.string(),
+ }),
+ steps: [planActivities, planIndoorActivities, sythesizeStep],
+})
+  // Run planActivities and planIndoorActivities in parallel
+ .parallel([planActivities, planIndoorActivities])
+ .then(sythesizeStep)
+ .commit();
+
+// Define the main weather workflow
+const weatherWorkflow = createWorkflow({
+ id: "weather-workflow-step3-concurrency",
+ inputSchema: z.object({
+    city: z.string().describe("The city to get the weather for"),
+ }),
+ outputSchema: z.object({
+ activities: z.string(),
+ }),
+ steps: [fetchWeather, planBothWorkflow, planActivities],
greptile
logic: Steps array includes planBothWorkflow which is already used in branch condition - may cause circular dependency
diff block
network_mode: "service:api"
# Pausing this for local deployments until we can build out better multi-model support.
- # litellm:
- # image: ghcr.io/berriai/litellm:main-latest
- # container_name: buster-litellm
- # volumes:
- # - ./litellm_vertex_config.yaml:/litellm_vertex_config.yaml
- # command: ["--config", "/litellm_vertex_config.yaml", "--port", "4001"]
- # ports:
- # - "4001:4001"
- # healthcheck:
- # test: ["CMD", "curl", "-f", "http://localhost:4001/health/readiness"]
- # interval: 30s
- # timeout: 10s
- # retries: 3
- # depends_on:
- # api:
- # condition: service_healthy
+ litellm:
+ image: ghcr.io/berriai/litellm:main-latest
+ container_name: buster-litellm
+ volumes:
+ - ./litellm_config/config.yaml:/config.yaml
+ command: ["--config", "/config.yaml", "--port", "4001"]
+ ports:
+ - "4001:4001"
+ healthcheck:
+ test: ["CMD", "curl", "-f", "http://localhost:4001/health/readiness"]
+ interval: 30s
+ timeout: 10s
+ retries: 3
+ depends_on:
+ api:
+ condition: service_healthy
greptile
logic: litellm service depends on api, but api doesn't use litellm in its dependencies. This could cause circular dependency issues if api needs litellm
diff block
network_mode: "service:api"
# Pausing this for local deployments until we can build out better multi-model support.
- # litellm:
- # image: ghcr.io/berriai/litellm:main-latest
- # container_name: buster-litellm
- # volumes:
- # - ./litellm_vertex_config.yaml:/litellm_vertex_config.yaml
- # command: ["--config", "/litellm_vertex_config.yaml", "--port", "4001"]
- # ports:
- # - "4001:4001"
- # healthcheck:
- # test: ["CMD", "curl", "-f", "http://localhost:4001/health/readiness"]
- # interval: 30s
- # timeout: 10s
- # retries: 3
- # depends_on:
- # api:
- # condition: service_healthy
+ litellm:
+ image: ghcr.io/berriai/litellm:main-latest
+ container_name: buster-litellm
+ volumes:
+ - ./litellm_config/config.yaml:/config.yaml
+ command: ["--config", "/config.yaml", "--port", "4001"]
+ ports:
+ - "4001:4001"
+ healthcheck:
+ test: ["CMD", "curl", "-f", "http://localhost:4001/health/readiness"]
+ interval: 30s
+ timeout: 10s
+ retries: 3
+ depends_on:
+ api:
+ condition: service_healthy
greptile
logic: litellm depending on api service creates a circular dependency since api likely needs litellm for LLM functionality
diff block
network_mode: "service:api"
# Pausing this for local deployments until we can build out better multi-model support.
- # litellm:
- # image: ghcr.io/berriai/litellm:main-latest
- # container_name: buster-litellm
- # volumes:
- # - ./litellm_vertex_config.yaml:/litellm_vertex_config.yaml
- # command: ["--config", "/litellm_vertex_config.yaml", "--port", "4001"]
- # ports:
- # - "4001:4001"
- # healthcheck:
- # test: ["CMD", "curl", "-f", "http://localhost:4001/health/readiness"]
- # interval: 30s
- # timeout: 10s
- # retries: 3
- # depends_on:
- # api:
- # condition: service_healthy
+ litellm:
+ image: ghcr.io/berriai/litellm:main-latest
+ container_name: buster-litellm
+ volumes:
+ - ./litellm_config/config.yaml:/config.yaml
+ command: ["--config", "/config.yaml", "--port", "4001"]
+ ports:
+ - "4001:4001"
+ healthcheck:
+ test: ["CMD", "curl", "-f", "http://localhost:4001/health/readiness"]
+ interval: 30s
+ timeout: 10s
+ retries: 3
+ depends_on:
+ api:
+ condition: service_healthy
greptile
logic: litellm depending on api service creates a circular dependency since api uses LLM_BASE_URL pointing to litellm
diff block
import type { z } from 'zod';
import type { Step } from './step';
-import type { WorkflowOptions } from './types';
+import type { WorkflowConfig } from './workflow';
import { Workflow as BaseWorkflow } from './workflow';
export * from './index';
greptile
logic: Circular dependency warning - exporting from './index' in a file that will likely be imported by index.ts
diff block
+import { ApiProperty, ApiPropertyOptional } from '@nestjs/swagger';
+import { FindOptionsOrder, FindOptionsRelations, FindOptionsSelect, FindOptionsWhere } from 'typeorm';
+import { plainToClass, Transform, TransformFnParams, Type } from 'class-transformer';
+import { IsNotEmpty, IsOptional, Max, Min, ValidateNested } from 'class-validator';
+import { PlainObject } from '@gauzy/contracts';
+import { parseObject, parseToBoolean } from '@gauzy/utils';
+import { TenantOrganizationBaseDTO } from './tenant-organization-base.dto';
+
+/**
+ * Base DTO for 'select' fields. What fields should be selected.
+ */
+export class FindSelectQueryDTO<T = any> {
+ @ApiPropertyOptional({ type: Object })
+ @IsOptional()
+ @Transform(({ value }: TransformFnParams) => parseObject(value, parseToBoolean))
+ readonly select?: FindOptionsSelect<T>;
+}
+
+/**
+ * Base DTO for 'relations' to load (joined entities).
+ */
+export class FindRelationsQueryDTO<T = any> extends FindSelectQueryDTO<T> {
+ @ApiPropertyOptional({ type: Object })
+ @IsOptional()
+ readonly relations?: FindOptionsRelations<T>;
+}
+
+/**
+ * Simple condition that should be applied to match entities.
+ */
+export class FindWhereQueryDTO<T> extends FindRelationsQueryDTO<T> {
+ @ApiProperty({ type: Object })
+ @IsNotEmpty()
+ @ValidateNested({ each: true })
+ @Type(() => TenantOrganizationBaseDTO)
+ @Transform(({ value }: TransformFnParams) => (value ? escapeQueryWithParameters(value) : {}))
+ readonly where: FindOptionsWhere<T>;
greptile
logic: Circular dependency risk: escapeQueryWithParameters is used in the decorator before it's defined
diff block
+import { ComponentFactoryResolver, ElementRef, Component } from '@angular/core';
+
+@Component({
+ standalone: false,
+ selector: 'storybook-attribute-selector[foo=bar]',
+ template: `<h3>Attribute selector</h3>
+Selector: {{ selectors }} <br />
+Generated template: {{ generatedTemplate }}`,
+})
+export class AttributeSelectorComponent {
+ generatedTemplate!: string;
+
+ selectors!: string;
+
+ constructor(
+ public el: ElementRef,
+ private resolver: ComponentFactoryResolver
+ ) {
+ const factory = this.resolver.resolveComponentFactory(AttributeSelectorComponent);
greptile
logic: resolving the component's own factory creates a circular dependency that could cause initialization issues
diff block
+{
+ "name": "@storybook/preset-angular-webpack",
+ "version": "8.5.0-alpha.18",
+ "description": "Storybook for Angular: View Angular components in isolation with Hot Reloading.",
+ "keywords": [
+ "storybook"
+ ],
+ "homepage": "https://github.com/storybookjs/storybook/tree/next/code/presets/angular-webpack",
+ "bugs": {
+ "url": "https://github.com/storybookjs/storybook/issues"
+ },
+ "repository": {
+ "type": "git",
+ "url": "https://github.com/storybookjs/storybook.git",
+ "directory": "code/presets/html-webpack"
+ },
+ "funding": {
+ "type": "opencollective",
+ "url": "https://opencollective.com/storybook"
+ },
+ "license": "MIT",
+ "exports": {
+ ".": {
+ "types": "./dist/index.d.ts",
+ "node": "./dist/index.js",
+ "import": "./dist/index.mjs",
+ "require": "./dist/index.js"
+ },
+ "./preset": {
+ "types": "./dist/index.d.ts",
+ "node": "./dist/index.js",
+ "import": "./dist/index.mjs",
+ "require": "./dist/index.js"
+ },
+ "./dist/server/framework-preset-angular-cli": {
+ "types": "./dist/server/framework-preset-angular-cli.d.ts",
+ "import": "./dist/server/framework-preset-angular-cli.mjs",
+ "require": "./dist/server/framework-preset-angular-cli.js"
+ },
+ "./dist/server/framework-preset-angular-docs": {
+ "types": "./dist/server/framework-preset-angular-docs.d.ts",
+ "import": "./dist/server/framework-preset-angular-docs.mjs",
+ "require": "./dist/server/framework-preset-angular-docs.js"
+ },
+ "./dist/server/framework-preset-angular-ivy": {
+ "types": "./dist/server/framework-preset-angular-ivy.d.ts",
+ "import": "./dist/server/framework-preset-angular-ivy.mjs",
+ "require": "./dist/server/framework-preset-angular-ivy.js"
+ },
+ "./package.json": "./package.json"
+ },
+ "main": "dist/index.js",
+ "module": "dist/index.mjs",
+ "types": "dist/index.d.ts",
+ "files": [
+ "dist/**/*",
+ "README.md",
+ "*.js",
+ "*.d.ts",
+ "!src/**/*"
+ ],
+ "scripts": {
+ "check": "jiti ../../../scripts/prepare/check.ts",
+ "prep": "jiti ../../../scripts/prepare/bundle.ts"
+ },
+ "dependencies": {
+ "@storybook/components": "workspace:*",
+ "@storybook/core-webpack": "workspace:*",
+ "@storybook/global": "^5.0.0",
+ "@storybook/manager-api": "workspace:*",
+ "@storybook/preset-angular-webpack": "workspace:*",
greptile
logic: circular dependency - package depends on itself which will cause installation issues
diff block
+export class RegisterResponseModel {
+ public readonly accessToken: string;
+
+ constructor(data: RegisterResponseModel) {
greptile
style: Constructor parameter type creates circular dependency - consider using an interface instead
suggested fix
+ constructor(data: { accessToken: string }) {
diff block
+/**
+ * Client for communicating with a Harmony Hub.
+ * Handles connection, command execution, activity management, and state caching.
+ * @module
+ */
+
+import getHarmonyClient from "@harmonyhub/client-ws";
+import { getPreferenceValues, LocalStorage } from "@raycast/api";
+
+import { HarmonyError, ErrorCategory } from "../../types/core/errors";
+import {
+ HarmonyHub,
+ HarmonyDevice,
+ HarmonyActivity,
+ HarmonyCommand,
+ isHarmonyDevice,
+ isHarmonyActivity,
+} from "../../types/core/harmony";
+import { debug, error, info, warn } from "../logger";
+
+/** Cache expiration time in milliseconds (24 hours) */
+const CACHE_EXPIRY = 24 * 60 * 60 * 1000;
+
+/**
+ * Interface for cached hub configuration
+ * @interface CachedConfig
+ */
+interface CachedConfig {
+ /** List of devices associated with the hub */
+ devices: HarmonyDevice[];
+ /** List of activities configured on the hub */
+ activities: HarmonyActivity[];
+ /** Timestamp when the cache was created */
+ timestamp: number;
+}
+
+/**
+ * Interface for command execution body
+ * @interface HarmonyCommandBody
+ */
+interface HarmonyCommandBody {
+ /** Command identifier */
+ command: string;
+ /** Target device identifier */
+ deviceId: string;
+ /** Command type (e.g., "IRCommand") */
+ type: string;
+}
+
+/**
+ * Interface for command function configuration
+ * @interface CommandFunction
+ */
+interface CommandFunction {
+ /** Function name */
+ name: string;
+ /** Display label */
+ label?: string;
+ /** Command action configuration */
+ action?: {
+ /** Command identifier */
+ command?: string;
+ };
+}
+
+/**
+ * Interface for device control group
+ * @interface ControlGroup
+ */
+interface ControlGroup {
+ /** Group name */
+ name: string;
+ /** List of functions in this group */
+ function: CommandFunction[];
+}
+
+/**
+ * Interface for raw device data from hub
+ * @interface RawDevice
+ */
+interface RawDevice {
+ /** Device identifier */
+ id: string;
+ /** Display label */
+ label?: string;
+ /** Device type */
+ type?: string;
+ /** List of control groups */
+ controlGroup: ControlGroup[];
+}
+
+/**
+ * Interface for raw hub configuration
+ * @interface RawConfig
+ */
+interface RawConfig {
+ /** List of devices */
+ device: RawDevice[];
+}
+
+/**
+ * Interface for raw activity data from hub
+ * @interface RawActivity
+ */
+interface RawActivity {
+ /** Activity identifier */
+ id: string;
+ /** Activity display label */
+ label: string;
+ /** Activity type */
+ type: string;
+}
+
+/**
+ * Client for communicating with a Harmony Hub
+ * Handles connection, command execution, and activity management
+ */
+export class HarmonyClient {
+ /** Map of active clients by hub ID */
+ private static activeClients: Map<string, HarmonyClient> = new Map();
+
+ /** Get or create a client for a hub */
+ public static getClient(hub: HarmonyHub): HarmonyClient {
+ const existingClient = this.activeClients.get(hub.hubId);
+ if (existingClient) {
+ debug(`Reusing existing client for hub ${hub.name}`);
+ return existingClient;
+ }
+
+ debug(`Creating new client for hub ${hub.name}`);
+ const client = new HarmonyClient(hub);
+ this.activeClients.set(hub.hubId, client);
+ return client;
+ }
+
+ /** Connected client instance */
+ private client: Awaited<ReturnType<typeof getHarmonyClient>> | null = null;
+ /** Connection state */
+ private isConnected = false;
+ /** The hub this client is connected to */
+ public readonly hub: HarmonyHub;
+ /** Cache key for this hub's configuration */
+ private cacheKey: string;
+
+ /**
+ * Creates a new HarmonyClient instance
+ * @param hub - The Harmony Hub to connect to
+ */
+ private constructor(hub: HarmonyHub) {
+ this.hub = hub;
+ this.cacheKey = `harmony-config-${hub.hubId}`;
+ }
+
+ /**
+ * Check if the client is currently connected
+ * @returns True if connected, false otherwise
+ */
+ public isClientConnected(): boolean {
+ return this.isConnected;
+ }
+
+ /**
+ * Connects to the Harmony Hub and retrieves its configuration.
+ * Establishes WebSocket connection and verifies connectivity by fetching initial config.
+ * Sets up disconnect handler and validates connection state.
+ * @throws {HarmonyError} If connection fails or initial config cannot be retrieved
+ */
+ public async connect(): Promise<void> {
+ if (this.isConnected) {
+ info(`Already connected to hub ${this.hub.name}`);
+ return;
+ }
+
+ try {
+ info(`Initiating connection to hub ${this.hub.name} (${this.hub.ip})`);
+
+ // Create client with remoteId if available for faster connection
+ debug("Creating Harmony client", {
+ ip: this.hub.ip,
+ port: this.hub.port,
+ remoteId: this.hub.remoteId,
+ });
+
+ this.client = await getHarmonyClient(this.hub.ip);
+
+ // Setup disconnect handler before setting connected state
+ this.client?.on("disconnected", () => {
+ warn(`Disconnected from hub ${this.hub.name}`);
+ this.isConnected = false;
+ // Remove from active clients on disconnect
+ HarmonyClient.activeClients.delete(this.hub.hubId);
+ });
+
+ // Verify connection by attempting to get config
+ debug("Verifying connection by fetching initial config");
+
+ // Load both devices and activities to ensure proper caching
+ const [devices, activities] = await Promise.all([this.getDevicesFromHub(), this.getActivitiesFromHub()]);
+
+ // Only set connected state after successful verification
+ this.isConnected = true;
+ info(`Successfully connected to hub ${this.hub.name}`);
+
+ // Cache the complete config
+ await this.updateConfigCache(devices, activities);
+
+ debug("Initial config loaded and cached", {
+ hubName: this.hub.name,
+ deviceCount: devices.length,
+ activityCount: activities.length,
+ });
+
+ return;
+ } catch (err) {
+ this.isConnected = false;
+ this.client = null;
+ // Remove from active clients on error
+ HarmonyClient.activeClients.delete(this.hub.hubId);
+ const harmonyError = new HarmonyError(
+ "Failed to connect to hub",
+ ErrorCategory.HUB_COMMUNICATION,
+ err instanceof Error ? err : undefined
+ );
+ error("Connection failed", { error: harmonyError.getDetailedMessage() });
+ throw harmonyError;
+ }
+ }
+
+ /**
+ * Retrieves the list of devices from the hub.
+ * Attempts to load from cache first, falls back to hub query if cache is invalid.
+ * @returns Promise resolving to list of devices
+ * @throws {HarmonyError} If retrieving devices fails or hub is not connected
+ */
+ public async getDevices(): Promise<HarmonyDevice[]> {
+ if (!this.client || !this.isConnected) {
+ throw new HarmonyError("Not connected to hub", ErrorCategory.STATE);
+ }
+
+ try {
+ // Try to get from cache first
+ const cached = await this.getCachedConfig();
+ if (cached?.devices) {
+ debug("Using cached devices for hub", {
+ hubName: this.hub.name,
+ deviceCount: cached.devices.length,
+ });
+ return cached.devices;
+ }
+
+ debug("No cached devices found, fetching from hub");
+ const devices = await this.getDevicesFromHub();
+
+ // Cache the new devices
+ await this.updateConfigCache(devices, await this.getActivitiesFromHub());
+
+ return devices;
+ } catch (err) {
+ throw new HarmonyError(
+ "Failed to get devices",
+ ErrorCategory.HUB_COMMUNICATION,
+ err instanceof Error ? err : undefined
+ );
+ }
+ }
+
+ /**
+ * Gets devices directly from the hub via WebSocket.
+ * @returns Promise resolving to list of mapped HarmonyDevice objects
+ * @throws {HarmonyError} If client not initialized or hub communication fails
+ * @private
+ */
+ private async getDevicesFromHub(): Promise<HarmonyDevice[]> {
+ if (!this.client) {
+ throw new HarmonyError("Client not initialized", ErrorCategory.CONNECTION);
+ }
+
+ try {
+ debug("Fetching devices from hub", { hubName: this.hub.name });
+ const rawConfig = (await this.client.getAvailableCommands()) as RawConfig;
+
+ if (!rawConfig.device || rawConfig.device.length === 0) {
+ warn("No devices found in hub config");
+ return [];
+ }
+
+ debug(`Found ${rawConfig.device.length} devices`);
+
+ const mappedDevices = rawConfig.device.map((device) => {
+ const mappedDevice = {
+ id: device.id,
+ name: device.label || device.id,
+ type: device.type || "Unknown",
+ commands: device.controlGroup.flatMap((group) =>
+ group.function.map((fn) => ({
+ id: fn.name,
+ name: fn.name,
+ label: fn.label || fn.name,
+ deviceId: device.id,
+ group: fn.action?.command || "IRCommand",
+ }))
+ ),
+ } as HarmonyDevice;
+
+ // Validate mapped device
+ if (!isHarmonyDevice(mappedDevice)) {
+ error("Invalid device mapping", { device, mappedDevice });
+ throw new HarmonyError(`Invalid device mapping for ${device.id}`, ErrorCategory.VALIDATION);
+ }
+
+ return mappedDevice;
+ });
+
+ debug("Successfully mapped devices", {
+ deviceCount: mappedDevices.length,
+ commandCounts: mappedDevices.map((d) => ({
+ deviceId: d.id,
+ commandCount: d.commands.length,
+ })),
+ });
+
+ return mappedDevices;
+ } catch (err) {
+ throw new HarmonyError(
+ "Failed to get devices from hub",
+ ErrorCategory.HUB_COMMUNICATION,
+ err instanceof Error ? err : new Error(String(err))
+ );
+ }
+ }
+
+ /**
+ * Retrieves the list of activities from the hub.
+ * Attempts to load from cache first, falls back to hub query if cache is invalid.
+ * @returns Promise resolving to list of activities
+ * @throws {HarmonyError} If retrieving activities fails or hub is not connected
+ */
+ public async getActivities(): Promise<HarmonyActivity[]> {
+ if (!this.client || !this.isConnected) {
+ throw new HarmonyError("Not connected to hub", ErrorCategory.STATE);
+ }
+
+ try {
+ // Try to get from cache first
+ const cached = await this.getCachedConfig();
+ if (cached?.activities) {
+ debug("Using cached activities for hub", {
+ hubName: this.hub.name,
+ activityCount: cached.activities.length,
+ });
+ return cached.activities;
+ }
+
+ debug("No cached activities found, fetching from hub", {
+ hubName: this.hub.name,
+ });
+ const activities = await this.getActivitiesFromHub();
+
+ // Cache the new activities along with current devices
+ await this.updateConfigCache(await this.getDevices(), activities);
+
+ return activities;
+ } catch (err) {
+ throw new HarmonyError(
+ "Failed to get activities",
+ ErrorCategory.HUB_COMMUNICATION,
+ err instanceof Error ? err : undefined
+ );
+ }
+ }
+
+ /**
+ * Gets the currently running activity.
+ * Queries the hub for current activity and matches it against known activities.
+ * @returns Promise resolving to current activity or null if none
+ * @throws {HarmonyError} If retrieving current activity fails or hub is not connected
+ */
+ public async getCurrentActivity(): Promise<HarmonyActivity | null> {
+ if (!this.client || !this.isConnected) {
+ throw new HarmonyError("Not connected to hub", ErrorCategory.STATE);
+ }
+
+ try {
+ const rawActivity = await this.client.getCurrentActivity();
+ debug("Got current activity from hub", { rawActivity });
+
+ if (!rawActivity) {
+ return null;
+ }
+
+ // Convert raw activity string to proper type
+ const currentActivityId = String(rawActivity);
+
+ // Get all activities to find the current one
+ const activities = await this.getActivities();
+ const activity = activities.find((a) => a.id === currentActivityId);
+
+ if (!activity) {
+ warn("Current activity not found in activity list", {
+ currentActivityId,
+ availableActivities: activities.map((a) => a.id),
+ });
+ return null;
+ }
+
+ const currentActivity = {
+ ...activity,
+ isCurrent: true,
+ };
+
+ // Validate current activity
+ if (!isHarmonyActivity(currentActivity)) {
+ error("Invalid current activity", { currentActivity });
+ throw new HarmonyError("Invalid current activity data", ErrorCategory.VALIDATION);
+ }
+
+ return currentActivity;
+ } catch (err) {
+ throw new HarmonyError(
+ "Failed to get current activity",
+ ErrorCategory.HUB_COMMUNICATION,
+ err instanceof Error ? err : new Error(String(err))
+ );
+ }
+ }
+
+ /**
+ * Starts an activity by ID.
+ * Initiates the activity and waits for confirmation of successful start.
+ * @param activityId - ID of the activity to start
+ * @throws {HarmonyError} If starting activity fails or hub is not connected
+ */
+ public async startActivity(activityId: string): Promise<void> {
+ if (!this.client || !this.isConnected) {
+ throw new HarmonyError("Not connected to hub", ErrorCategory.STATE);
+ }
+
+ try {
+ debug("Starting activity", { activityId });
+
+ await this.client.startActivity(activityId);
+
+ // Wait for activity to start and verify
+ const startTime = Date.now();
+ const maxWaitTime = 10000; // 10 seconds max wait
+
+ while (Date.now() - startTime < maxWaitTime) {
+ const currentActivity = await this.getCurrentActivity();
+ if (currentActivity?.id === activityId) {
+ debug("Activity started successfully", { activityId });
+ return;
+ }
+ await new Promise((resolve) => setTimeout(resolve, 500));
+ }
+
+ throw new Error("Timeout waiting for activity to start");
+ } catch (err) {
+ throw new HarmonyError(
+ `Failed to start activity ${activityId}`,
+ ErrorCategory.COMMAND_EXECUTION,
+ err instanceof Error ? err : undefined
+ );
+ }
+ }
+
+ /**
+ * Stops the current activity.
+ * Sends stop command and waits for confirmation of successful stop.
+ * @throws {HarmonyError} If stopping activity fails or hub is not connected
+ */
+ public async stopActivity(): Promise<void> {
+ if (!this.client || !this.isConnected) {
+ throw new HarmonyError("Not connected to hub", ErrorCategory.STATE);
+ }
+
+ try {
+ debug("Stopping current activity");
+
+ const currentActivity = await this.getCurrentActivity();
+ if (!currentActivity) {
+ debug("No activity running");
+ return;
+ }
+
+ await this.client.turnOff();
+
+ // Wait for activity to stop and verify
+ const startTime = Date.now();
+ const maxWaitTime = 10000; // 10 seconds max wait
+
+ while (Date.now() - startTime < maxWaitTime) {
+ const activity = await this.getCurrentActivity();
+ if (!activity) {
+ debug("Activity stopped successfully");
+ return;
+ }
+ await new Promise((resolve) => setTimeout(resolve, 500));
+ }
+
+ throw new Error("Timeout waiting for activity to stop");
+ } catch (err) {
+ throw new HarmonyError(
+ "Failed to stop activity",
+ ErrorCategory.COMMAND_EXECUTION,
+ err instanceof Error ? err : undefined
+ );
+ }
+ }
+
+ /**
+ * Gets cached configuration if available.
+ * Checks cache validity and expiration.
+ * @returns Promise resolving to cached configuration or null
+ * @throws {HarmonyError} If reading cache fails
+ * @private
+ */
+ private async getCachedConfig(): Promise<CachedConfig | null> {
+ try {
+ const cached = await LocalStorage.getItem<string>(this.cacheKey);
+ if (!cached) {
+ debug("No cache found for hub", { hubName: this.hub.name });
+ return null;
+ }
+
+ const config = JSON.parse(cached) as CachedConfig;
+
+ // Validate cache has required data
+ if (!config.devices?.length || !config.activities?.length) {
+ debug("Cache invalid - missing data", {
+ hubName: this.hub.name,
+ hasDevices: !!config.devices?.length,
+ hasActivities: !!config.activities?.length,
+ });
+ await LocalStorage.removeItem(this.cacheKey);
+ return null;
+ }
+
+ // Check if cache is expired
+ if (Date.now() - config.timestamp > CACHE_EXPIRY) {
+ info("Config cache expired for hub", this.hub.name);
+ await LocalStorage.removeItem(this.cacheKey);
+ return null;
+ }
+
+ debug("Using valid cache for hub", {
+ hubName: this.hub.name,
+ deviceCount: config.devices.length,
+ activityCount: config.activities.length,
+ age: Math.round((Date.now() - config.timestamp) / 1000) + "s",
+ });
+
+ return config;
+ } catch (err) {
+ warn("Failed to get cached config:", err);
+ return null;
+ }
+ }
+
+ /**
+ * Update the config cache with new devices and activities.
+ * @param devices - List of devices to cache
+ * @param activities - List of activities to cache
+ * @private
+ */
+ private async updateConfigCache(devices: HarmonyDevice[], activities: HarmonyActivity[]): Promise<void> {
+ try {
+ // Validate we have data to cache
+ if (!devices.length && !activities.length) {
+ debug("Skipping cache update - no data to cache", {
+ hubName: this.hub.name,
+ });
+ return;
+ }
+
+ const cache: CachedConfig = {
+ devices: devices.length ? devices : await this.getDevices(),
+ activities: activities.length ? activities : await this.getActivitiesFromHub(),
+ timestamp: Date.now(),
greptile
logic: Potential circular dependency issue here. If devices.length is 0, this calls getDevices() which might call getActivitiesFromHub() again, which could lead to infinite recursion in error cases
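One hedged way to remove the re-entrancy risk flagged above: fetch the hub configuration once and derive both lists from that single result, instead of having the cache-update path fall back into the getters. This is a minimal sketch with hypothetical names (`fetchHubConfig`, `refreshCache`); the real client and storage calls are elided.

```typescript
// Sketch: one fetch populates the cache, so no getter can recurse
// back into the cache-update path.
interface HubConfig {
  devices: string[];
  activities: string[];
}

// Stand-in for a single network round-trip to the hub.
async function fetchHubConfig(): Promise<HubConfig> {
  return { devices: ["TV", "Receiver"], activities: ["Watch TV"] };
}

async function refreshCache(): Promise<HubConfig> {
  // Both lists come from the same response; nothing here calls
  // getDevices() or getActivitiesFromHub(), so the cycle is gone.
  const config = await fetchHubConfig();
  // ...write `config` plus a timestamp to LocalStorage here...
  return config;
}
```

The design point: cache refresh should be the single source of truth that the getters read from, never the other way around.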
diff block
+{
+ "$schema": "https://turbo.build/schema.json",
+ "extends": ["//"],
+ "tasks": {
+ "build": {
+ "dependsOn": ["@posthog/frontend#prepare"]
+ },
+ "prepare": {
+ "dependsOn": ["@posthog/frontend#prepare"]
greptile
logic: Having prepare also depend on frontend#prepare could create a circular dependency if frontend ever needs to prepare storybook.
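A hedged sketch of one way to break that potential loop: keep the cross-package edge on `build` only, and let this package's `prepare` stand alone. This assumes `@posthog/frontend#prepare` never needs this package's `prepare`; the task names are taken from the diff above.

```json
{
  "$schema": "https://turbo.build/schema.json",
  "extends": ["//"],
  "tasks": {
    "build": {
      "dependsOn": ["@posthog/frontend#prepare"]
    },
    "prepare": {}
  }
}
```

With a single directed edge between the packages, Turborepo's task graph stays acyclic even if the frontend later gains its own `prepare` dependencies.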
diff block
+/* eslint-disable no-restricted-imports */
+import { forwardRef, Module } from '@nestjs/common';
+
+import { NestjsQueryGraphQLModule } from '@ptc-org/nestjs-query-graphql';
+import { NestjsQueryTypeOrmModule } from '@ptc-org/nestjs-query-typeorm';
+
+import { TypeORMModule } from 'src/database/typeorm/typeorm.module';
+import { TypeORMService } from 'src/database/typeorm/typeorm.service';
+import { EnvironmentModule } from 'src/engine/core-modules/environment/environment.module';
+import { EnvironmentService } from 'src/engine/core-modules/environment/environment.service';
+import { PabxService } from 'src/engine/core-modules/telephony/services/pabx.service';
+import { TelephonyService } from 'src/engine/core-modules/telephony/services/telephony.service';
+import { Telephony } from 'src/engine/core-modules/telephony/telephony.entity';
+import { TelephonyResolver } from 'src/engine/core-modules/telephony/telephony.resolver';
+import { Workspace } from 'src/engine/core-modules/workspace/workspace.entity';
+import { WorkspaceModule } from 'src/engine/core-modules/workspace/workspace.module';
+import { DataSourceModule } from 'src/engine/metadata-modules/data-source/data-source.module';
+
+@Module({
+ imports: [
+ NestjsQueryGraphQLModule.forFeature({
+ imports: [
+ NestjsQueryTypeOrmModule.forFeature([Telephony, Workspace], 'core'),
+ TypeORMModule,
+ ],
+ }),
+ DataSourceModule,
+ forwardRef(() => WorkspaceModule),
greptile
style: circular dependency with WorkspaceModule detected - consider restructuring to avoid forwardRef
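One common restructuring that avoids `forwardRef` is dependency inversion: the telephony side declares an interface it needs, and the workspace side implements it, so neither module imports the other. This is a framework-free sketch with hypothetical names (`WorkspaceLookup`, `InMemoryWorkspaceLookup`), not the actual PostHog/Twenty code; in NestJS you would bind the implementation to a custom provider token instead of using `forwardRef`.

```typescript
// Owned by the telephony module: the only contract it needs.
interface WorkspaceLookup {
  findWorkspaceName(id: string): string | undefined;
}

class TelephonyService {
  // Depends on the abstraction, not on the workspace module.
  constructor(private readonly workspaces: WorkspaceLookup) {}

  describeLine(workspaceId: string): string {
    const name = this.workspaces.findWorkspaceName(workspaceId) ?? "unknown";
    return `PBX line for workspace ${name}`;
  }
}

// Lives on the workspace side; implements the telephony-owned interface,
// so the import arrow points one way only.
class InMemoryWorkspaceLookup implements WorkspaceLookup {
  private names = new Map([["w1", "Acme"]]);

  findWorkspaceName(id: string): string | undefined {
    return this.names.get(id);
  }
}
```

Because the interface lives with its consumer, the dependency graph is acyclic by construction and the `forwardRef(() => WorkspaceModule)` workaround becomes unnecessary.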
Want to avoid this bug in your codebase? Try Greptile.