Commit Graph

129 Commits

Author SHA1 Message Date
shokollm 4c48932ece fix: support inline formatting in table cells (bold, italic, code, links) 2026-04-10 10:15:21 +00:00
shokollm bfc85648db fix: improve markdown parser for tables, headings, and line breaks 2026-04-10 10:09:46 +00:00
shokollm 925920eee1 fix: add typing indicator back when waiting for response 2026-04-10 10:05:50 +00:00
shokollm 299e74cffa chore: hide ProUpgradeBanner (not implementing pro yet) 2026-04-10 09:59:08 +00:00
shokollm 2b875cfa27 feat: show thinking above response with expand/collapse, first line preview 2026-04-10 09:56:21 +00:00
shokollm ae612ad725 fix: use requests instead of OpenAI client for thinking endpoint 2026-04-10 09:50:36 +00:00
shokollm 08912019c2 feat: use MiniMax extended thinking endpoint for native reasoning 2026-04-10 09:47:09 +00:00
shokollm 44453877b3 feat: use direct LLM with structured JSON for thinking/response separation 2026-04-10 09:31:07 +00:00
shokollm ad4a1e89d5 fix: revert to kickoff (stream not available on Agent) 2026-04-10 09:23:46 +00:00
shokollm 57fa200ba9 feat: add thinking content to chat response 2026-04-10 09:16:08 +00:00
shokollm db4fb83243 feat: add markdown rendering and thinking state UI to chat 2026-04-10 09:01:16 +00:00
shokollm 560b61c431 fix: increase timeout for long-running AI chat requests 2026-04-10 08:51:03 +00:00
shokollm c6baadf8b8 fix: use JSON body for login instead of form data 2026-04-10 08:09:42 +00:00
shokollm 937cc2da60 fix: send username instead of email for login API 2026-04-10 07:47:21 +00:00
32cd7184ea Merge pull request 'feat: implement conversational AI agent with tool-calling' (#52) from feat/conversational-agent-with-tools into main 2026-04-10 09:39:45 +02:00
shokollm 765e390b9b feat: implement conversational AI agent with tool-calling
Prototype implementation that allows:
1. Normal conversation with the AI
2. Tool-calling to update trading strategies

Created new ConversationalAgent that uses CrewAI with tools:
- get_current_strategy: Check current bot strategy
- update_trading_strategy: Update bot's trading configuration

The agent can now respond to questions like 'What is this?' without
forcing JSON output, and can update strategies when the user provides
specific parameters.

Refs #51
2026-04-10 05:00:22 +00:00
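The two tools named in the commit can be sketched as plain functions (a minimal sketch: the signatures and the module-level `_strategy` state are hypothetical, and the field names borrow the flat schema mentioned in #35; in the repo they are wired into CrewAI as agent tools):

```python
# Module-level state stands in for the bot's stored configuration
# (hypothetical; the real implementation reads/writes persistent storage).
_strategy = {"buy_condition_type": "price_drop", "buy_condition_percent": 5}

def get_current_strategy() -> dict:
    """Check the bot's current strategy."""
    return dict(_strategy)

def update_trading_strategy(**params) -> dict:
    """Update the bot's trading configuration with the given parameters."""
    _strategy.update(params)
    return dict(_strategy)
```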
21ce282cae Merge pull request 'fix: add fallback UUID generator for crypto.randomUUID compatibility' (#50) from fix/crypto-randomuuid-fallback into main 2026-04-10 06:26:49 +02:00
shokollm 4fa9b0456a fix: add fallback UUID generator for crypto.randomUUID compatibility
crypto.randomUUID() is not available in all environments (e.g., older browsers,
non-secure contexts). Added a fallback UUID v4 implementation.
2026-04-10 04:19:45 +00:00
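The fallback presumably follows the standard UUID v4 construction from RFC 4122: 16 random bytes with the version and variant bits forced. A sketch of that construction (shown in Python for illustration; the actual fix lives in the frontend JavaScript):

```python
import secrets

def uuid4_fallback() -> str:
    """Build an RFC 4122 version-4 UUID from 16 random bytes:
    set the version nibble to 4 and the variant bits to 10xxxxxx,
    then format as 8-4-4-4-12 hex groups."""
    b = bytearray(secrets.token_bytes(16))
    b[6] = (b[6] & 0x0F) | 0x40  # version nibble -> 4
    b[8] = (b[8] & 0x3F) | 0x80  # variant bits -> RFC 4122
    h = b.hex()
    return f"{h[:8]}-{h[8:12]}-{h[12:16]}-{h[16:20]}-{h[20:]}"
```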
af9900d0ba Merge pull request 'fix: add timeout for chat requests and improve error handling' (#49) from fix/chat-timeout-handling into main 2026-04-10 06:15:45 +02:00
shokollm b3ab004447 fix: add timeout for chat requests and improve error handling
Changes:
1. Add 30-second timeout for chat API requests using AbortController
2. The user's message now shows immediately, before the API response (already done in a previous PR)
3. Differentiate between timeout errors and other errors in error messages
4. API client now accepts optional signal parameter for abort support
2026-04-10 04:09:30 +00:00
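The frontend uses AbortController for the 30-second cutoff; the error-classification half of the change (point 3) can be sketched like this (Python for illustration, function and message text hypothetical):

```python
CHAT_TIMEOUT_SECONDS = 30  # the 30-second limit mentioned above

def classify_chat_error(exc: Exception) -> str:
    """Map a failed chat request to a user-facing message,
    distinguishing timeouts from other failures."""
    if isinstance(exc, TimeoutError):
        return (f"The assistant did not respond within "
                f"{CHAT_TIMEOUT_SECONDS} seconds. Please try again.")
    return f"Chat request failed: {exc}"
```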
d394bc0857 Merge pull request 'fix: display user messages in chat interface' (#48) from fix/display-user-messages into main 2026-04-10 06:04:23 +02:00
shokollm dfa806ab53 fix: add user's message to frontend chat store when sending
Previously, only the assistant's response was added to the frontend store.
Now both user and assistant messages are stored, so the conversation
displays correctly in the chat interface.
2026-04-10 04:00:51 +00:00
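The store behaviour after the fix, as a minimal sketch (Python for illustration; the real store is in the frontend, and the `ChatStore`/`send` names are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class ChatStore:
    messages: list = field(default_factory=list)

    def send(self, text: str, get_reply) -> None:
        # Append the user's message first so it renders immediately...
        self.messages.append({"role": "user", "content": text})
        # ...then the assistant's reply once the API responds.
        self.messages.append({"role": "assistant", "content": get_reply(text)})
```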
3493775b7f Merge pull request 'fix: update MiniMaxConnector default model to MiniMax-M2.7' (#47) from fix/minimax-connector-model into main 2026-04-10 06:00:36 +02:00
shokollm 82645dfb3b fix: update MiniMaxConnector default model to MiniMax-M2.7 2026-04-10 03:53:26 +00:00
c17fa243a1 Merge pull request 'fix: use MiniMax text/chatcompletion_v2 endpoint' (#46) from fix/minimax-endpoint-v2 into main 2026-04-10 05:44:25 +02:00
shokollm a55ed9cc04 fix: use MiniMax text/chatcompletion_v2 endpoint instead of chat/completions
The /v1/chat/completions endpoint returns 529 (overloaded) while
/v1/text/chatcompletion_v2 works reliably.
2026-04-10 03:42:20 +00:00
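Putting the endpoint pieces together (host from #44, path from this commit; the constant names and the helper are hypothetical):

```python
# Endpoint configuration after the fix, values as stated in the commit messages.
MINIMAX_API_BASE = "https://api.minimax.io"
CHAT_COMPLETION_PATH = "/v1/text/chatcompletion_v2"  # /v1/chat/completions returned 529

def chat_completion_url() -> str:
    """Full URL for MiniMax chat completion requests."""
    return MINIMAX_API_BASE + CHAT_COMPLETION_PATH
```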
d1408b74b4 Merge pull request 'fix: properly configure MiniMax API endpoint and CrewAI LLM' (#45) from fix/minimax-api-endpoint-v2 into main 2026-04-10 05:31:13 +02:00
shokollm 4197475eed fix: properly configure CrewAI LLM with MiniMax api_base
- Use CrewAI's LLM class directly with api_base parameter instead of custom subclass
- Remove broken MiniMaxLLM inheritance from LLM
- Update agent creation to use LLM(model, api_key, api_base) pattern

The issue was that inheriting from CrewAI's LLM class caused the api_base
to be set to None. Now we use CrewAI's LLM directly with the correct parameters.

Fixes #43
2026-04-10 03:19:51 +00:00
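As a configuration sketch of the `LLM(model, api_key, api_base)` pattern the commit describes (parameter names are taken from the commit message and not verified against a specific CrewAI release; the exact `api_base` path is an assumption):

```python
import os
from crewai import LLM

# Configure CrewAI's LLM class directly, rather than subclassing it.
minimax_llm = LLM(
    model="MiniMax-M2.7",
    api_key=os.environ["MINIMAX_API_KEY"],
    api_base="https://api.minimax.io/v1",  # host per #44; path assumed
)
```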
87bac8894a Merge pull request 'fix: update MiniMax API endpoint to api.minimax.io' (#44) from fix/minimax-api-endpoint into main 2026-04-10 05:10:17 +02:00
shokollm bef4479675 fix: update MiniMax API endpoint and default model
Changes:
1. Updated API endpoint from api.minimax.chat to api.minimax.io
2. Changed default model from MiniMax-Text-01 to MiniMax-M2.7
   (MiniMax-Text-01 is not available for all API key plans)
3. Updated .env.example with correct default model

MiniMax API docs: https://platform.minimax.io/docs/api-reference/text-anthropic-api

Fixes #43
2026-04-10 03:07:02 +00:00
75970c57e3 Merge pull request 'feat: return access token on user registration' (#42) from feat/41-return-token-on-register into main 2026-04-10 03:31:15 +02:00
shokollm f23044465a feat: return access token on user registration
After successful registration, the backend now returns an access token
(along with token_type) so the frontend can:
- Store the token in localStorage
- Fetch the user profile
- Redirect to dashboard

Fixes #41
2026-04-10 01:28:01 +00:00
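The response shape implied by the commit, as a sketch (the function is hypothetical; `access_token`/`token_type` follow the usual OAuth2 bearer convention the message names):

```python
def registration_response(access_token: str) -> dict:
    """Body returned after a successful registration, so the frontend
    can store the token, fetch the profile, and redirect."""
    return {"access_token": access_token, "token_type": "bearer"}
```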
a6e4d28aa7 Merge pull request 'fix: add bcrypt version constraint for passlib compatibility' (#40) from fix/bcrypt-compatibility into main 2026-04-10 02:55:36 +02:00
shokollm 8693946cb8 fix: add bcrypt version constraint for passlib compatibility
bcrypt 5.0.0 is incompatible with passlib 1.7.x: passlib tries to
access bcrypt.__about__.__version__, which was removed in bcrypt 5.x.

Constrain bcrypt to >=4.0,<5.0 to maintain compatibility.
2026-04-10 00:55:18 +00:00
a2f549c056 Merge pull request 'fix: correct import paths in ai_agent module' (#39) from fix/ai-agent-imports into main 2026-04-09 17:32:21 +02:00
shokollm ad6e57655d fix: correct import paths in ai_agent module
- Fix relative import path in crew.py (from ..core to ...core)
- Update __init__.py exports to match actual class names
- Remove incorrect CrewAgent and LLMConnector exports
2026-04-09 15:27:09 +00:00
ac5e9d8b81 Merge pull request 'fix: add error logging to simulate engine to prevent silent failures' (#38) from fix/issue-30 into main 2026-04-09 12:19:36 +02:00
shokollm 81f3342365 fix: add error logging to simulate engine to prevent silent failures
Errors during price fetching are now logged and stored in an errors list,
allowing users to see error count/warnings in simulation results.

Acceptance Criteria:
- [x] Errors are logged (not silently swallowed)
- [x] User can see error count/warnings in simulation results
- [x] Simulation completes even if some price fetches fail (graceful degradation)
2026-04-09 10:16:22 +00:00
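The graceful-degradation pattern the acceptance criteria describe can be sketched like this (names such as `run_simulation` and `fetch_price` are hypothetical; the point is that failures are logged and collected rather than aborting the run):

```python
import logging

logger = logging.getLogger("simulate")

def run_simulation(tokens, fetch_price):
    """Fetch a price per token; log and record failures instead of
    raising, and return the error list alongside the results."""
    prices, errors = {}, []
    for token in tokens:
        try:
            prices[token] = fetch_price(token)
        except Exception as exc:  # logged, not silently swallowed
            logger.warning("price fetch failed for %s: %s", token, exc)
            errors.append(f"{token}: {exc}")
    return {"prices": prices, "errors": errors}
```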
6adad0701d Merge pull request 'fix: consolidate AveCloudClient to single implementation' (#37) from fix/issue-29 into main 2026-04-09 12:11:59 +02:00
shokollm 405b35c3ba fix: consolidate AveCloudClient to single implementation in services/ave/client.py 2026-04-09 10:06:16 +00:00
dd25d38e7e Merge pull request 'feat: implement stop-loss and take-profit risk management' (#36) from fix/issue-28 into main 2026-04-09 11:39:50 +02:00
shokollm da8327c0e0 feat: implement stop-loss and take-profit in backtest and simulate engines 2026-04-09 09:14:08 +00:00
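The core of a stop-loss/take-profit check for a long position can be sketched as follows (a minimal sketch with hypothetical parameter names; the engines' actual exit handling is richer):

```python
def check_exit(entry_price: float, price: float,
               stop_loss_pct: float, take_profit_pct: float):
    """Return 'stop_loss' or 'take_profit' when the current price
    crosses the configured threshold for a long position, else None."""
    if price <= entry_price * (1 - stop_loss_pct / 100):
        return "stop_loss"
    if price >= entry_price * (1 + take_profit_pct / 100):
        return "take_profit"
    return None
```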
8d33ea9a44 Merge pull request 'fix: flatten strategy config schema (backtesting broken)' (#35) from fix/issue-25 into main 2026-04-09 09:32:49 +02:00
shokollm d81464b869 fix: flatten strategy config schema to match engine expectations
The LLM was outputting a nested params structure, but the engines expect flat fields.
This caused backtesting and simulation to never trigger any trades.

Changes:
- llm_connector.py: Update prompt to output flat condition structure
- crew.py: Update StrategyValidator to validate flat structure
- crew.py: Update StrategyExplainer to read flat structure

Fixes #25
2026-04-09 07:31:09 +00:00
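To illustrate the schema change (all field names here are hypothetical, chosen only to show nested vs. flat; the helper is not from the repo):

```python
nested = {  # shape the LLM used to emit
    "buy_condition": {"type": "price_drop", "params": {"percent": 5}},
}
flat = {    # shape the engines expect
    "buy_condition_type": "price_drop",
    "buy_condition_percent": 5,
}

def flatten_condition(cfg: dict) -> dict:
    """Collapse a nested condition into flat engine fields."""
    cond = cfg["buy_condition"]
    out = {"buy_condition_type": cond["type"]}
    for key, value in cond.get("params", {}).items():
        out[f"buy_condition_{key}"] = value
    return out
```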
55b008d4e8 Merge pull request 'fix: validate chain is 'bsc' for Phase 1' (#34) from fix/issue-31 into main 2026-04-09 09:10:55 +02:00
shokollm 04e4c1a487 fix: validate chain is 'bsc' for BacktestCreate and SimulationCreate 2026-04-09 06:58:16 +00:00
feb65131fa Merge pull request 'fix: populate config endpoints with chain and token data' (#33) from fix/issue-27 into main 2026-04-09 08:23:43 +02:00
shokollm 50af4e0722 fix: reduce tokens limit to 20 per review 2026-04-09 06:18:31 +00:00
shokollm 786e964e32 fix: return bsc chain and tokens from AVE API in config endpoints 2026-04-09 06:02:05 +00:00
41b699f9ee Merge pull request 'fix: make strategy_config and llm_config optional in BotCreate' (#32) from fix/issue-26 into main 2026-04-09 07:54:20 +02:00