Feature: Board of Directors Consensus Mechanism #2

google-labs-jules[bot] wants to merge 1 commit into `feat-cli-service-options` from
Conversation
- Implements a 'Board of Directors' workflow with 5 distinct AI personas (Pragmatist, Architect, Security Officer, Performance Zealot, User Advocate).
- Adds `bin/coding board` command to trigger the consensus process.
- Implements `BoardMeeting` class for orchestration (Proposal -> Anonymization -> Voting -> Tally).
- Adds `LLMClient` for lightweight API interaction (Anthropic/OpenAI).
- Adds unit tests for the voting logic.
👋 Jules, reporting for duty! I'm here to lend a hand with this pull request. When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down. I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job! For more direct control, you can switch me to Reactive Mode; when this mode is on, I will only act on comments where you specifically mention me. For security, I will only act on instructions from the user who triggered this task. New to Jules? Learn more at jules.google/docs.
Pull request overview
This PR introduces a "Board of Directors" consensus mechanism that simulates a meeting of 5 AI personas with different technical perspectives (Pragmatist, Architect, Security Officer, Performance Zealot, and User Advocate). The system generates proposals from each director, anonymizes them, collects votes, and determines the winning approach through consensus.
Key changes:
- Implements a BoardMeeting orchestration class that manages the full consensus workflow
- Adds an LLMClient wrapper supporting both Anthropic and OpenAI APIs
- Provides CLI access via the `bin/coding board` or `bin/coding consensus` commands
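The proposal → anonymization → voting → tally workflow described above can be sketched end to end in plain JavaScript. This is an illustrative stand-in, not the PR's actual `BoardMeeting` implementation; the helper names and data shapes (`{ director, content }` proposals, `{ voter, choice }` votes) are assumptions based on the review context:

```javascript
// Minimal sketch of the consensus pipeline (hypothetical helpers,
// not the PR's BoardMeeting class).
function anonymize(proposals) {
  // Assign letter IDs A, B, C... and keep the author as originalDirector.
  return proposals.map((p, i) => ({
    id: String.fromCharCode(65 + i),
    content: p.content,
    originalDirector: p.director
  }));
}

function tally(votes, proposals) {
  const scores = {};
  for (const p of proposals) scores[p.id] = 0;
  for (const v of votes) {
    if (v.choice in scores) scores[v.choice] += 1; // ignore invalid choices
  }
  const winnerId = Object.keys(scores)
    .reduce((a, b) => (scores[b] > scores[a] ? b : a));
  return { winner: proposals.find(p => p.id === winnerId), scores };
}

const anonymized = anonymize([
  { director: 'Pragmatist', content: 'Ship the simple version' },
  { director: 'Architect', content: 'Design for extension' }
]);
const result = tally(
  [
    { voter: 'V1', choice: 'A' },
    { voter: 'V2', choice: 'A' },
    { voter: 'V3', choice: 'B' }
  ],
  anonymized
);
```

In the real implementation the votes would come from LLM calls per director; here they are hard-coded to keep the sketch self-contained.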
Reviewed changes
Copilot reviewed 7 out of 8 changed files in this pull request and generated 10 comments.
Show a summary per file
| File | Description |
|---|---|
| lib/consensus/board-meeting.js | Core orchestration logic for conducting meetings, generating proposals, anonymizing them, collecting votes, and tallying results |
| lib/consensus/llm-client.js | Lightweight LLM client wrapper supporting both Anthropic Claude and OpenAI GPT-4 APIs |
| lib/consensus/directors.js | Configuration of 5 AI director personas with distinct system prompts and roles |
| scripts/run-board-meeting.js | CLI script that handles user input, validates API keys, and displays meeting results |
| tests/unit/BoardMeeting.test.js | Unit tests covering proposal generation, anonymization, and vote tallying |
| bin/coding | Adds board and consensus command aliases to the main CLI |
| .eslintrc.cjs | Basic ESLint configuration for the project |
| test/setup.js | Minimal test setup file for Jest |
Comments suppressed due to low confidence (1)

lib/consensus/llm-client.js:1

- Unused import `spawn`.

```js
import { spawn } from 'child_process';
```
```diff
@@ -0,0 +1 @@
+module.exports = { extends: "eslint:recommended", env: { node: true, es6: true }, parserOptions: { ecmaVersion: 2022, sourceType: "module" } };
```
The ESLint configuration is written on a single line, making it hard to read and maintain. Consider formatting it properly:

```js
module.exports = {
  extends: "eslint:recommended",
  env: {
    node: true,
    es6: true
  },
  parserOptions: {
    ecmaVersion: 2022,
    sourceType: "module"
  }
};
```
```js
body: JSON.stringify({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: systemPrompt },
    { role: 'user', content: userPrompt }
  ]
})
```
The OpenAI API call doesn't specify a `max_tokens` parameter, while the Anthropic call sets it to 4000. For consistency and to ensure predictable behavior, consider adding a `max_tokens` limit to the OpenAI request:

```js
body: JSON.stringify({
  model: 'gpt-4o',
  max_tokens: 4000,
  messages: [
    { role: 'system', content: systemPrompt },
    { role: 'user', content: userPrompt }
  ]
})
```

```js
const winner = anonymizedProposals.find(p => p.id === winnerId);

return {
  topic: "",
```
The `topic` field in the return object is an empty string, but it should be populated with the actual topic that was passed to the method. This could be useful for debugging or logging purposes.

Consider changing:

```js
return {
  topic: topic, // or pass topic as a parameter to tallyVotes
  winner: winner,
  scores: scores,
  votes: validVotes,
  proposals: anonymizedProposals
};
```

```js
import { spawn } from 'child_process';
```

The `spawn` import from 'child_process' is not used anywhere in this file and should be removed.
```js
describe('BoardMeeting', () => {
  let meeting;
  let mockLLM;

  beforeEach(() => {
    // Setup mock implementation
    mockLLM = {
      complete: jest.fn()
    };

    // In ESM mocking with Jest, we might need to handle the mock differently
    // if LLMClient is the default export.
    // However, since we are mocking the module, we assume the constructor calls will rely on the mocked class.

    // For this specific test, we can just spy on the methods or ensure the class is mocked correctly.
    // Given the error "mockClear is not a function", it seems LLMClient isn't being treated as a Jest mock function automatically in this ESM context.

    // Workaround: Mock the instance method on the prototype if strict mocking fails
    // But better: Use the mocked module instance.
  });

  test('generateProposals should call LLM for each director', async () => {
    // Manually injecting mock into the meeting instance for this test
    // to bypass module mocking complexity in ESM/Jest for now.
    meeting = new BoardMeeting();
    meeting.llm = mockLLM;
    mockLLM.complete.mockResolvedValue('A proposal');

    const proposals = await meeting.generateProposals('Topic');

    expect(proposals.length).toBe(5);
    expect(mockLLM.complete).toHaveBeenCalledTimes(5);
    expect(proposals[0].content).toBe('A proposal');
  });

  test('anonymizeProposals should assign IDs', () => {
    meeting = new BoardMeeting();
    const proposals = [
      { director: 'Dir 1', content: 'Content 1' },
      { director: 'Dir 2', content: 'Content 2' }
    ];

    const result = meeting.anonymizeProposals(proposals);

    expect(result[0].id).toBe('A');
    expect(result[1].id).toBe('B');
    expect(result[0].originalDirector).toBe('Dir 1');
  });

  test('tallyVotes should determine the winner', () => {
    meeting = new BoardMeeting();
    const anonymizedProposals = [
      { id: 'A', originalDirector: 'Dir 1', content: 'Content 1' },
      { id: 'B', originalDirector: 'Dir 2', content: 'Content 2' }
    ];

    const votes = [
      { voter: 'Voter 1', choice: 'A', reason: 'Good' },
      { voter: 'Voter 2', choice: 'A', reason: 'Better' },
      { voter: 'Voter 3', choice: 'B', reason: 'Okay' }
    ];

    const result = meeting.tallyVotes(votes, anonymizedProposals);

    expect(result.winner.id).toBe('A');
    expect(result.scores['A']).toBe(2);
    expect(result.scores['B']).toBe(1);
  });
});
```
The test file is missing coverage for several important methods:

- `conductMeeting` (the main orchestration method)
- `collectVotes` (handles voting logic and director exclusion)
- `parseVote` (parses vote responses from LLM)

Additionally, error handling scenarios are not tested (e.g., when LLM calls fail).

Consider adding tests for these methods to ensure the complete workflow is properly tested.
```js
// In ESM mocking with Jest, we might need to handle the mock differently
// if LLMClient is the default export.
// However, since we are mocking the module, we assume the constructor calls will rely on the mocked class.

// For this specific test, we can just spy on the methods or ensure the class is mocked correctly.
// Given the error "mockClear is not a function", it seems LLMClient isn't being treated as a Jest mock function automatically in this ESM context.

// Workaround: Mock the instance method on the prototype if strict mocking fails
// But better: Use the mocked module instance.
```
These lengthy explanatory comments (lines 18-26) about ESM mocking challenges don't add value to the test file and make it harder to read. Since the tests use manual mock injection (`meeting.llm = mockLLM`), these comments are unnecessary.

Consider removing these comments or replacing them with a brief note if needed:

```js
// Manual mock injection is used due to ESM/Jest mocking limitations.
```
```js
// Find winner
let maxScore = -1;
let winnerId = null;

Object.entries(scores).forEach(([id, score]) => {
  if (score > maxScore) {
    maxScore = score;
    winnerId = id;
  }
});

const winner = anonymizedProposals.find(p => p.id === winnerId);

return {
  topic: "",
  winner: winner,
  scores: scores,
  votes: validVotes,
  proposals: anonymizedProposals
```
The winner determination logic doesn't handle ties properly. When multiple proposals have the same maximum score, the first one encountered in the iteration order will always win. This could lead to unexpected results in tie situations.

Consider either:

- Detecting and reporting ties explicitly
- Implementing a tie-breaking mechanism (e.g., random selection, reporting all tied proposals)

```js
// Find winners (handle ties)
let maxScore = -1;

Object.values(scores).forEach(score => {
  if (score > maxScore) {
    maxScore = score;
  }
});

// Collect all proposal IDs with maxScore
const winnerIds = Object.entries(scores)
  .filter(([id, score]) => score === maxScore)
  .map(([id, score]) => id);

const winners = anonymizedProposals.filter(p => winnerIds.includes(p.id));

return {
  topic: "",
  winners: winners,
  scores: scores,
  votes: validVotes,
  proposals: anonymizedProposals,
  tie: winners.length > 1
```
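The tie-aware selection suggested above can be exercised in isolation. This standalone sketch mirrors the suggestion's shape but is not the PR's code; `findWinners` and the sample data are illustrative:

```javascript
// Standalone tie-aware winner selection over a score map.
function findWinners(scores, proposals) {
  // Highest score across all proposals.
  const maxScore = Math.max(...Object.values(scores));
  // Every proposal ID that reached maxScore (one entry unless tied).
  const winnerIds = Object.entries(scores)
    .filter(([, score]) => score === maxScore)
    .map(([id]) => id);
  const winners = proposals.filter(p => winnerIds.includes(p.id));
  return { winners, tie: winners.length > 1 };
}

const proposals = [
  { id: 'A', content: 'Proposal A' },
  { id: 'B', content: 'Proposal B' }
];
const tied = findWinners({ A: 2, B: 2 }, proposals);  // even split
const clear = findWinners({ A: 3, B: 1 }, proposals); // clear winner
```

Returning a `winners` array plus a `tie` flag lets the caller decide on a tie-break (re-vote, random pick, or reporting all tied proposals) instead of silently favoring iteration order.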
```js
`;
console.log(` - ${director.name} is voting...`);
const response = await this.llm.complete(director.systemPrompt, prompt);
return this.parseVote(response, director.name);
```
The code instructs directors not to vote for their own proposal (line 79), but doesn't enforce this constraint. A director could still vote for their own proposal if the LLM doesn't follow instructions properly.

Consider adding validation after parsing the vote:

```js
const vote = this.parseVote(response, director.name);
if (vote.choice === ownId) {
  console.warn(`${director.name} attempted to vote for their own proposal. Invalidating vote.`);
  return null;
}
return vote;
```
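The guard above is easy to unit-check on its own. This is a hedged sketch, not the PR's implementation: it assumes votes are `{ voter, choice }` objects and that each director's own anonymized ID (`ownId`) is known to the caller:

```javascript
// Drop votes where a director picked their own anonymized proposal.
// Returning null mirrors the review suggestion of invalidating the vote.
function validateVote(vote, ownId) {
  if (vote.choice === ownId) {
    return null; // self-vote: invalidate rather than count it
  }
  return vote;
}

const ok = validateVote({ voter: 'Architect', choice: 'B' }, 'A');
const selfVote = validateVote({ voter: 'Architect', choice: 'A' }, 'A');
```

Downstream, the tally should then filter out `null` entries (the existing `validVotes` variable suggests such a filter already exists).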
```js
anonymizeProposals(proposals) {
  return proposals.map((p, index) => ({
    id: String.fromCharCode(65 + index), // A, B, C, D, E
```
The ID generation using `String.fromCharCode(65 + index)` assumes there will always be 5 or fewer proposals. If the system is extended to support more directors in the future, this could produce unexpected results (e.g., '[', '\', etc. after 'Z').

Consider adding a check:

```js
if (index > 25) {
  throw new Error('Too many proposals for single-letter IDs');
}
```

Or use a different scheme:

```js
id: `P${index + 1}`, // P1, P2, P3, ...
```
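If letter IDs are preferred over the `P1`-style scheme, a spreadsheet-style base-26 generator keeps pure-letter IDs working past 'Z'. This is a sketch of one possible approach, not part of the PR:

```javascript
// Spreadsheet-style column IDs: 0 -> 'A', 25 -> 'Z', 26 -> 'AA', 27 -> 'AB', ...
function letterId(index) {
  let id = '';
  let n = index;
  do {
    // Take the current least-significant base-26 digit as a letter.
    id = String.fromCharCode(65 + (n % 26)) + id;
    // Shift right one digit; the -1 accounts for 'A' meaning 0, not 1.
    n = Math.floor(n / 26) - 1;
  } while (n >= 0);
  return id;
}
```

For the current fixed board of 5 directors this is overkill, but it removes the silent-corruption hazard if the director list ever grows.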
```js
import LLMClient from '../../lib/consensus/llm-client.js';
```

Unused import `LLMClient`; it is not referenced anywhere in this file and should be removed.
This PR introduces a "Board of Directors" feature that allows users to get a consensus on a topic from 5 different AI personas. The agents generate proposals, which are then anonymized and voted on by the group to determine the best approach. This feature is accessible via the `bin/coding board` command.

PR created automatically by Jules for task 6859861011012982519 started by @davidraehles