
Feature: Board of Directors Consensus Mechanism #2

Open
google-labs-jules[bot] wants to merge 1 commit into feat-cli-service-options from feat/board-of-directors

Conversation

@google-labs-jules

This PR introduces a "Board of Directors" feature that allows users to get a consensus on a topic from 5 different AI personas. The agents generate proposals, which are then anonymized and voted on by the group to determine the best approach. This feature is accessible via the `bin/coding board` command.


PR created automatically by Jules for task 6859861011012982519 started by @davidraehles

- Implements a 'Board of Directors' workflow with 5 distinct AI personas (Pragmatist, Architect, Security Officer, Performance Zealot, User Advocate).
- Adds `bin/coding board` command to trigger the consensus process.
- Implements `BoardMeeting` class for orchestration (Proposal -> Anonymization -> Voting -> Tally).
- Adds `LLMClient` for lightweight API interaction (Anthropic/OpenAI).
- Adds unit tests for the voting logic.
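The Proposal -> Anonymization -> Voting -> Tally pipeline summarized above can be sketched as follows. This is a minimal illustration with a stubbed LLM, not the PR's actual `BoardMeeting` implementation; the helper names mirror the unit tests quoted later in this review, but details may differ.

```javascript
// Illustrative sketch of the consensus pipeline. The `llm` stub stands in
// for LLMClient.complete(systemPrompt, userPrompt); names are assumptions.
const directors = ['Pragmatist', 'Architect', 'Security Officer',
                   'Performance Zealot', 'User Advocate'];

const llm = { complete: async () => 'A proposal' }; // stubbed LLM client

// Step 1: each director persona produces a proposal for the topic.
async function generateProposals(topic) {
  return Promise.all(directors.map(async (director) => ({
    director,
    content: await llm.complete(`You are the ${director}.`, topic),
  })));
}

// Step 2: strip author identity, assigning letter IDs for blind voting.
function anonymizeProposals(proposals) {
  return proposals.map((p, index) => ({
    id: String.fromCharCode(65 + index), // A, B, C, ...
    originalDirector: p.director,
    content: p.content,
  }));
}

// Step 4: tally votes and pick the proposal with the highest count.
function tallyVotes(votes, anonymized) {
  const scores = {};
  for (const v of votes) scores[v.choice] = (scores[v.choice] || 0) + 1;
  const winnerId = Object.keys(scores)
    .reduce((a, b) => (scores[b] > scores[a] ? b : a));
  return { winner: anonymized.find(p => p.id === winnerId), scores };
}
```

Step 3 (collecting votes) would call the LLM once per director against the anonymized list; it is omitted here for brevity.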
@google-labs-jules
Author

👋 Jules, reporting for duty! I'm here to lend a hand with this pull request.

When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down.

I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job!

For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with @jules. You can find this option in the Pull Request section of your global Jules UI settings. You can always switch back!


For security, I will only act on instructions from the user who triggered this task.

New to Jules? Learn more at jules.google/docs.

@davidraehles davidraehles marked this pull request as ready for review December 1, 2025 14:37
Copilot AI review requested due to automatic review settings December 1, 2025 14:37

Copilot AI left a comment


Pull request overview

This PR introduces a "Board of Directors" consensus mechanism that simulates a meeting of 5 AI personas with different technical perspectives (Pragmatist, Architect, Security Officer, Performance Zealot, and User Advocate). The system generates proposals from each director, anonymizes them, collects votes, and determines the winning approach through consensus.

Key changes:

  • Implements a BoardMeeting orchestration class that manages the full consensus workflow
  • Adds an LLMClient wrapper supporting both Anthropic and OpenAI APIs
  • Provides CLI access via the `bin/coding board` and `bin/coding consensus` commands

Reviewed changes

Copilot reviewed 7 out of 8 changed files in this pull request and generated 10 comments.

Show a summary per file
File Description
lib/consensus/board-meeting.js Core orchestration logic for conducting meetings, generating proposals, anonymizing them, collecting votes, and tallying results
lib/consensus/llm-client.js Lightweight LLM client wrapper supporting both Anthropic Claude and OpenAI GPT-4 APIs
lib/consensus/directors.js Configuration of 5 AI director personas with distinct system prompts and roles
scripts/run-board-meeting.js CLI script that handles user input, validates API keys, and displays meeting results
tests/unit/BoardMeeting.test.js Unit tests covering proposal generation, anonymization, and vote tallying
bin/coding Adds board and consensus command aliases to the main CLI
.eslintrc.cjs Basic ESLint configuration for the project
test/setup.js Minimal test setup file for Jest
Comments suppressed due to low confidence (1)

lib/consensus/llm-client.js:1

  • Unused import spawn.
import { spawn } from 'child_process';


Comment thread .eslintrc.cjs
@@ -0,0 +1 @@
module.exports = { extends: "eslint:recommended", env: { node: true, es6: true }, parserOptions: { ecmaVersion: 2022, sourceType: "module" } };

Copilot AI Dec 1, 2025


The ESLint configuration is written on a single line, making it hard to read and maintain. Consider formatting it properly:

module.exports = {
  extends: "eslint:recommended",
  env: {
    node: true,
    es6: true
  },
  parserOptions: {
    ecmaVersion: 2022,
    sourceType: "module"
  }
};
Suggested change
module.exports = { extends: "eslint:recommended", env: { node: true, es6: true }, parserOptions: { ecmaVersion: 2022, sourceType: "module" } };
module.exports = {
  extends: "eslint:recommended",
  env: {
    node: true,
    es6: true
  },
  parserOptions: {
    ecmaVersion: 2022,
    sourceType: "module"
  }
};

Copilot uses AI. Check for mistakes.
Comment on lines +59 to +65
body: JSON.stringify({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: systemPrompt },
    { role: 'user', content: userPrompt }
  ]
})

Copilot AI Dec 1, 2025


The OpenAI API call doesn't specify a max_tokens parameter, while the Anthropic call sets it to 4000. For consistency and to ensure predictable behavior, consider adding a max_tokens limit to the OpenAI request:

body: JSON.stringify({
  model: 'gpt-4o',
  max_tokens: 4000,
  messages: [
    { role: 'system', content: systemPrompt },
    { role: 'user', content: userPrompt }
  ]
})
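
One way to keep the two providers consistent is to derive both request bodies from a shared default. The sketch below is illustrative, not the PR's `LLMClient` code: the Anthropic model name is a placeholder assumption, while `gpt-4o` and the 4000-token default follow the values quoted in this review.

```javascript
// Sketch: shared max_tokens so Anthropic and OpenAI requests stay in sync.
// The Anthropic model name is a placeholder; adjust to the model in use.
const DEFAULT_MAX_TOKENS = 4000;

function buildRequestBody(provider, systemPrompt, userPrompt,
                          maxTokens = DEFAULT_MAX_TOKENS) {
  if (provider === 'anthropic') {
    // Anthropic's Messages API takes the system prompt as a top-level field.
    return {
      model: 'claude-3-5-sonnet-latest', // assumption, not from the PR
      max_tokens: maxTokens,
      system: systemPrompt,
      messages: [{ role: 'user', content: userPrompt }],
    };
  }
  // OpenAI's chat API carries the system prompt as the first message.
  return {
    model: 'gpt-4o',
    max_tokens: maxTokens,
    messages: [
      { role: 'system', content: systemPrompt },
      { role: 'user', content: userPrompt },
    ],
  };
}
```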

const winner = anonymizedProposals.find(p => p.id === winnerId);

return {
  topic: "",

Copilot AI Dec 1, 2025


The topic field in the return object is an empty string, but it should be populated with the actual topic that was passed to the method. This could be useful for debugging or logging purposes.

Consider changing:

return {
  topic: topic,  // or pass topic as parameter to tallyVotes
  winner: winner,
  scores: scores,
  votes: validVotes,
  proposals: anonymizedProposals
};

Comment on lines +1 to +2
import { spawn } from 'child_process';


Copilot AI Dec 1, 2025


The spawn import from 'child_process' is not used anywhere in this file and should be removed.

Suggested change
import { spawn } from 'child_process';

Comment on lines +8 to +76
describe('BoardMeeting', () => {
  let meeting;
  let mockLLM;

  beforeEach(() => {
    // Setup mock implementation
    mockLLM = {
      complete: jest.fn()
    };

    // In ESM mocking with Jest, we might need to handle the mock differently
    // if LLMClient is the default export.
    // However, since we are mocking the module, we assume the constructor calls will rely on the mocked class.

    // For this specific test, we can just spy on the methods or ensure the class is mocked correctly.
    // Given the error "mockClear is not a function", it seems LLMClient isn't being treated as a Jest mock function automatically in this ESM context.

    // Workaround: Mock the instance method on the prototype if strict mocking fails
    // But better: Use the mocked module instance.
  });

  test('generateProposals should call LLM for each director', async () => {
    // Manually injecting mock into the meeting instance for this test
    // to bypass module mocking complexity in ESM/Jest for now.
    meeting = new BoardMeeting();
    meeting.llm = mockLLM;
    mockLLM.complete.mockResolvedValue('A proposal');

    const proposals = await meeting.generateProposals('Topic');

    expect(proposals.length).toBe(5);
    expect(mockLLM.complete).toHaveBeenCalledTimes(5);
    expect(proposals[0].content).toBe('A proposal');
  });

  test('anonymizeProposals should assign IDs', () => {
    meeting = new BoardMeeting();
    const proposals = [
      { director: 'Dir 1', content: 'Content 1' },
      { director: 'Dir 2', content: 'Content 2' }
    ];

    const result = meeting.anonymizeProposals(proposals);

    expect(result[0].id).toBe('A');
    expect(result[1].id).toBe('B');
    expect(result[0].originalDirector).toBe('Dir 1');
  });

  test('tallyVotes should determine the winner', () => {
    meeting = new BoardMeeting();
    const anonymizedProposals = [
      { id: 'A', originalDirector: 'Dir 1', content: 'Content 1' },
      { id: 'B', originalDirector: 'Dir 2', content: 'Content 2' }
    ];

    const votes = [
      { voter: 'Voter 1', choice: 'A', reason: 'Good' },
      { voter: 'Voter 2', choice: 'A', reason: 'Better' },
      { voter: 'Voter 3', choice: 'B', reason: 'Okay' }
    ];

    const result = meeting.tallyVotes(votes, anonymizedProposals);

    expect(result.winner.id).toBe('A');
    expect(result.scores['A']).toBe(2);
    expect(result.scores['B']).toBe(1);
  });
});

Copilot AI Dec 1, 2025


The test file is missing coverage for several important methods:

  • conductMeeting (the main orchestration method)
  • collectVotes (handles voting logic and director exclusion)
  • parseVote (parses vote responses from LLM)

Additionally, error handling scenarios are not tested (e.g., when LLM calls fail).

Consider adding tests for these methods to ensure the complete workflow is properly tested.

Comment on lines +18 to +26
// In ESM mocking with Jest, we might need to handle the mock differently
// if LLMClient is the default export.
// However, since we are mocking the module, we assume the constructor calls will rely on the mocked class.

// For this specific test, we can just spy on the methods or ensure the class is mocked correctly.
// Given the error "mockClear is not a function", it seems LLMClient isn't being treated as a Jest mock function automatically in this ESM context.

// Workaround: Mock the instance method on the prototype if strict mocking fails
// But better: Use the mocked module instance.

Copilot AI Dec 1, 2025


These lengthy explanatory comments (lines 18-26) about ESM mocking challenges don't add value to the test file and make it harder to read. Since the tests use manual mock injection (meeting.llm = mockLLM), these comments are unnecessary.

Consider removing these comments or replacing them with a brief note if needed.

Suggested change
// In ESM mocking with Jest, we might need to handle the mock differently
// if LLMClient is the default export.
// However, since we are mocking the module, we assume the constructor calls will rely on the mocked class.
// For this specific test, we can just spy on the methods or ensure the class is mocked correctly.
// Given the error "mockClear is not a function", it seems LLMClient isn't being treated as a Jest mock function automatically in this ESM context.
// Workaround: Mock the instance method on the prototype if strict mocking fails
// But better: Use the mocked module instance.
// Manual mock injection is used due to ESM/Jest mocking limitations.

Comment on lines +123 to +141
// Find winner
let maxScore = -1;
let winnerId = null;

Object.entries(scores).forEach(([id, score]) => {
  if (score > maxScore) {
    maxScore = score;
    winnerId = id;
  }
});

const winner = anonymizedProposals.find(p => p.id === winnerId);

return {
  topic: "",
  winner: winner,
  scores: scores,
  votes: validVotes,
  proposals: anonymizedProposals

Copilot AI Dec 1, 2025


The winner determination logic doesn't handle ties properly. When multiple proposals have the same maximum score, the first one encountered in the iteration order will always win. This could lead to unexpected results in tie situations.

Consider either:

  1. Detecting and reporting ties explicitly
  2. Implementing a tie-breaking mechanism (e.g., random selection, reporting all tied proposals)
Suggested change
// Find winner
let maxScore = -1;
let winnerId = null;
Object.entries(scores).forEach(([id, score]) => {
  if (score > maxScore) {
    maxScore = score;
    winnerId = id;
  }
});
const winner = anonymizedProposals.find(p => p.id === winnerId);
return {
  topic: "",
  winner: winner,
  scores: scores,
  votes: validVotes,
  proposals: anonymizedProposals

// Find winners (handle ties)
let maxScore = -1;
Object.values(scores).forEach(score => {
  if (score > maxScore) {
    maxScore = score;
  }
});
// Collect all proposal IDs with maxScore
const winnerIds = Object.entries(scores)
  .filter(([id, score]) => score === maxScore)
  .map(([id, score]) => id);
const winners = anonymizedProposals.filter(p => winnerIds.includes(p.id));
return {
  topic: "",
  winners: winners,
  scores: scores,
  votes: validVotes,
  proposals: anonymizedProposals,
  tie: winners.length > 1

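A runnable version of the tie-aware tally suggested above might look like this. It is a sketch under the assumption that `scores` is a plain id-to-count object and that missing or invalid votes arrive as `null`, as the review comments imply; it is not the PR's actual code.

```javascript
// Tie-aware tally: returns every proposal that reached the top score,
// plus a `tie` flag. Illustrative sketch of the suggestion above.
function tallyVotes(votes, anonymizedProposals) {
  const scores = {};
  for (const p of anonymizedProposals) scores[p.id] = 0;

  // Count only votes that name a known proposal; skip null/invalid votes.
  for (const v of votes) {
    if (v && v.choice in scores) scores[v.choice] += 1;
  }

  const maxScore = Math.max(...Object.values(scores));
  const winnerIds = Object.keys(scores).filter(id => scores[id] === maxScore);
  const winners = anonymizedProposals.filter(p => winnerIds.includes(p.id));

  return { winners, scores, tie: winners.length > 1 };
}
```

Reporting all tied proposals (rather than silently picking the first) keeps the outcome deterministic and lets the caller decide whether to re-vote or break the tie some other way.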
`;
console.log(` - ${director.name} is voting...`);
const response = await this.llm.complete(director.systemPrompt, prompt);
return this.parseVote(response, director.name);

Copilot AI Dec 1, 2025


The code instructs directors not to vote for their own proposal (line 79), but doesn't enforce this constraint. A director could still vote for their own proposal if the LLM doesn't follow instructions properly.

Consider adding validation after parsing the vote:

const vote = this.parseVote(response, director.name);
if (vote.choice === ownId) {
  console.warn(`${director.name} attempted to vote for their own proposal. Invalidating vote.`);
  return null;
}
return vote;
Suggested change
return this.parseVote(response, director.name);
const vote = this.parseVote(response, director.name);
if (vote.choice === ownId) {
  console.warn(`${director.name} attempted to vote for their own proposal. Invalidating vote.`);
  return null;
}
return vote;

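The self-vote guard can be checked in isolation. The sketch below assumes the vote shape `{ voter, choice, reason }` seen in the unit tests quoted earlier; `validateVote` is a hypothetical helper, not a function in the PR.

```javascript
// Sketch: reject a ballot where a director picked their own proposal.
// `ownId` is the anonymized ID of that director's own proposal.
function validateVote(vote, ownId) {
  if (vote && vote.choice === ownId) {
    console.warn(`${vote.voter} attempted to vote for their own proposal. Invalidating vote.`);
    return null; // invalid ballots become null and are excluded from the tally
  }
  return vote;
}
```

Returning `null` (rather than throwing) lets the tally simply skip invalid ballots, which matches how the review suggests handling them.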

anonymizeProposals(proposals) {
  return proposals.map((p, index) => ({
    id: String.fromCharCode(65 + index), // A, B, C, D, E

Copilot AI Dec 1, 2025


The ID generation using String.fromCharCode(65 + index) assumes there will always be 5 or fewer proposals. If the system is extended to support more directors in the future, this could produce unexpected results (e.g., '[', '\', etc. after 'Z').

Consider adding a check or using a more scalable ID generation approach:

if (index > 25) {
  throw new Error('Too many proposals for single-letter IDs');
}

Or use a different scheme like: 'P1', 'P2', etc.

Suggested change
id: String.fromCharCode(65 + index), // A, B, C, D, E
id: `P${index + 1}`, // P1, P2, P3, ...

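The boundary is easy to demonstrate: index 25 maps to 'Z', and index 26 falls off the alphabet into '[', while the P-prefix scheme scales without limit. A small sketch contrasting the two schemes (helper names are illustrative):

```javascript
// Letter IDs: fine for up to 26 proposals, then garbage without a guard.
const letterId = (index) => {
  if (index > 25) throw new Error('Too many proposals for single-letter IDs');
  return String.fromCharCode(65 + index); // 'A' for 0 ... 'Z' for 25
};

// Prefix IDs: unbounded and just as readable in vote transcripts.
const prefixId = (index) => `P${index + 1}`; // P1, P2, P3, ...
```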
Comment on lines +3 to +4
import LLMClient from '../../lib/consensus/llm-client.js';


Copilot AI Dec 1, 2025


Unused import LLMClient.

Suggested change
import LLMClient from '../../lib/consensus/llm-client.js';
