# Polymarket-Browse Skill Review

**Date:** 2026-03-25
**Reviewer:** Hermes Agent (Shoko)
**Version Reviewed:** Current HEAD

---

## 1. Current State of SKILL.md

### 1.1 Overview

The SKILL.md is well-structured with clear sections:

- Installation instructions (Hermes Agent + OpenClaw)
- Usage with argument reference
- Output format examples
- Game categories table
- Filters explanation
- Pagination and rate-limiting notes
- Odds format documentation
### 1.2 Strengths

- Clear argument documentation with defaults
- Good output format examples showing both match and non-match markets
- Filters section is detailed and explains tradeable vs non-tradeable logic
- Game categories table is easy to reference
- Rate limiting and backoff strategy documented
### 1.3 Issues/Gaps in SKILL.md

| Issue | Severity | Notes |
|-------|----------|-------|
| No troubleshooting section | Low | API errors, partial fetches, common issues not documented |
| No examples for `--search` | Low | Only mentioned in passing, no concrete example |
| No mention of required dependencies | Low | Assumes curl is available (it is on Linux) |
| No changelog | Low | Hard to track what changed between versions |
| Telegram section minimal | Low | Doesn't explain HTML parse_mode limitations |
| No credits/author info | Low | Who built this? |
### 1.4 Recommendations for SKILL.md

1. **Add a troubleshooting section:**
   - Partial fetch warnings (API errors/timeouts)
   - What to do if no markets appear
   - Why some matches disappear after they start

2. **Add concrete usage examples:**

```bash
# Example: Find FlyQuest Counter-Strike matches
polymarket-browse --category "Counter Strike" --search "FlyQuest"

# Example: Get 10 matches, no tournament futures
polymarket-browse --category "Valorant" --matches 10 --non-matches-only
```

3. **Add HTML escape notes for Telegram:**
   - `<`, `>`, and `&` need to be escaped in Telegram messages

---
## 2. Current State of browse.py

### 2.1 Code Organization

The script is organized into logical sections with clear headers:

```
CONFIG
FETCH
FILTERS
FORMATTING
BROWSE
FORMAT
DISPLAY
TELEGRAM
MAIN
```

**Issues:**
- Lines are excessively long (browse.py is ~750 lines, and some functions are very dense)
- `print_browse()` is ~120 lines — too long to review mentally
- `send_to_telegram()` is ~100 lines — also too long
- `format_detail_event()` has deeply nested list comprehensions
- No type hints anywhere
- No docstrings on main functions (only on helper functions)
### 2.2 Line Length Issues (CRITICAL)

The user specifically asked about this. Here are the longest lines:

| Line | Length | Issue |
|------|--------|-------|
| ~line 100 | ~180 chars | `fetch_page()` URL construction |
| ~line 160 | ~160 chars | `fetch_all_pages()` loop |
| ~line 210 | ~200 chars | `is_tradeable_event()` ML market checks |
| ~line 240 | ~180 chars | `is_tradeable_event()` datetime parsing |
| ~line 300 | ~180 chars | `get_match_time_status()` datetime math |
| ~line 380 | ~200 chars | `format_detail_event()` list comprehension |
| ~line 470 | ~220 chars | `print_browse()` event formatting |
| ~line 540 | ~180 chars | `send_to_telegram()` message building |

**Root cause:** The code was written for functionality, not readability. String concatenation and nested conditionals make lines very long.
### 2.3 Proposed Solutions for Line Length

**Option A: Refactor to shorter lines**
- Break long URL constructions into multiple lines
- Extract nested conditionals into helper variables
- Use intermediate variables for complex expressions
- Target: max 120 characters per line

**Option B: Add unit tests**
- Write unit tests that verify behavior without needing to read every line
- Tests serve as executable documentation
- Anyone can run `pytest` to verify correctness
- See Section 3 for details

**Option C: Both (Recommended)**
- Refactor for readability
- Add unit tests for critical paths
- This is the best approach
### 2.4 Function-by-Function Analysis

#### `fetch_page()` (~35 lines)
**What it does:** Fetches one page from the Polymarket API with retry logic
**Issues:**
- URL construction is on one long line
- Exponential backoff is clear but verbose
- Could use the `requests` library instead of a curl subprocess

**Suggestions:**
- Break URL construction into a `params = {...}` dict
- Consider using `httpx` or `requests` instead of a curl subprocess
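To illustrate, the `params = {...}` suggestion could look like the sketch below. The endpoint and query parameters are copied from the code excerpt quoted in Section 6.4; the helper name `build_search_url` is hypothetical.

```python
from urllib.parse import urlencode

PAGE_SIZE = 50  # from the script's CONFIG section

def build_search_url(q: str, page: int = 1) -> str:
    """Build the gamma-api search URL from a params dict (sketch).

    urlencode handles every URL-sensitive character, not just spaces,
    which also fixes the '--search' encoding issue noted in Section 6.4.
    """
    base = "https://gamma-api.polymarket.com/public-search"
    params = {
        "q": q,
        "limit": PAGE_SIZE,
        "page": page,
        "search_profiles": "false",
        "search_tags": "false",
        "keep_closed_markets": 0,
        "events_status": "active",
        "cache": "false",
    }
    return f"{base}?{urlencode(params)}"
```

Each parameter sits on its own short line, so no line comes near the 120-character target.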
#### `fetch_all_pages()` (~25 lines)
**What it does:** Paginates through all results
**Issues:**
- `time.sleep(0.2)` is hardcoded — should be configurable
- No progress indicator for large fetches

**Suggestions:**
- Add a progress callback option
- Make the inter-page delay configurable
#### `is_tradeable_event()` (~70 lines)
**What it does:** Complex filter for tradeable match markets
**Issues:**
- This is the longest function at ~70 lines
- Multiple filter conditions stacked vertically (good) but with long lines (bad)
- Bare `except:` clauses that catch everything

**Suggestions:**
- Extract the `is_bo2_tie()` check (already done — good)
- Extract datetime comparisons into helper functions
- Add early returns to reduce nesting
- Change bare `except:` to specific exceptions
#### `is_tradeable_market()` (~20 lines)
**What it does:** Filter for individual markets
**Issues:**
- Very similar to `is_tradeable_event()` — code duplication
- Could reuse logic from the event version

**Suggestions:**
- Consider unifying with `is_tradeable_event()`
#### `get_match_time_status()` / `get_match_time_str()` (~50 lines combined)
**What it does:** Time formatting for display
**Issues:**
- Duplicate logic — both functions do similar things
- WIB (UTC+7) is hardcoded — the user is Indonesian, but this should still be configurable

**Suggestions:**
- Consolidate into one function that returns both values
- Make the timezone configurable
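One possible consolidation, with the timezone made configurable, might look like this (a sketch: the function name, return shape, and formatting are assumptions; `Asia/Jakarta` is WIB, UTC+7):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def describe_start_time(iso_str, tz_name="Asia/Jakarta", now=None):
    """Return (absolute_str, relative_str) for a market start time.

    One function in place of get_match_time_status()/get_match_time_str();
    tz_name replaces the hardcoded WIB offset.
    """
    start = datetime.fromisoformat(iso_str.replace("Z", "+00:00"))
    now = now or datetime.now(timezone.utc)
    local = start.astimezone(ZoneInfo(tz_name))
    mins = int((start - now).total_seconds() // 60)
    if mins < 0:
        rel = f"started {-mins}m ago"
    elif mins < 60:
        rel = f"in {mins}m"
    else:
        rel = f"in {mins // 60}h {mins % 60}m"
    return local.strftime("%Y-%m-%d %H:%M %Z"), rel
```

Callers that only need one of the two strings simply index the returned tuple.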
#### `print_browse()` (~120 lines)
**What it does:** Main display function for CLI output
**Issues:**
- ~120 lines is too long to review mentally
- Mixes display logic with data formatting
- Has its own `datetime` import inside the function body (an anti-pattern)

**Suggestions:**
- Break into smaller functions:
  - `format_match_line()`
  - `format_non_match_line()`
  - `print_match_section()`
  - `print_non_match_section()`
#### `send_to_telegram()` (~100 lines)
**What it does:** Telegram integration
**Issues:**
- At ~100 lines, too long
- Complex chunking logic for Telegram's 4096-character limit
- HTML escaping is not handled

**Suggestions:**
- Extract the chunking logic into a separate function
- Add an HTML escaping helper
- Consider using the `python-telegram-bot` library instead of curl
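The chunking extraction might look like this sketch (the function name follows the proposal in Section 3.1; 4096 is Telegram's documented per-message limit):

```python
def chunk_telegram_message(lines, max_len=4096):
    """Greedily pack lines into chunks that stay under Telegram's limit."""
    chunks, current, current_len = [], [], 0
    for line in lines:
        # +1 accounts for the newline that joins lines within a chunk
        if current and current_len + len(line) + 1 > max_len:
            chunks.append("\n".join(current))
            current, current_len = [], 0
        current.append(line)
        current_len += len(line) + 1
    if current:
        chunks.append("\n".join(current))
    return chunks
```

A single line longer than `max_len` would still produce an oversized chunk; the real helper would need to hard-split that case.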
#### `format_detail_event()` (~30 lines)
**What it does:** Formats an event with all markets for the detail view
**Issues:**
- The list comprehension is deeply nested and hard to read
- ~15-line dict construction

**Suggestions:**
- Break the dict construction into multiple lines
- Extract market formatting into a helper
### 2.5 Error Handling

| Issue | Severity | Notes |
|-------|----------|-------|
| Bare `except:` clauses | Medium | Catches KeyboardInterrupt, SystemExit |
| No logging | Low | Uses print statements |
| No structured errors | Low | Could benefit from custom exceptions |
### 2.6 Missing Features/Bugs

| Issue | Severity | Notes |
|-------|----------|-------|
| No test suite | High | Cannot verify correctness automatically |
| WIB hardcoded | Medium | Timezone should be configurable |
| No cache option | Low | Could cache results for repeated queries |
| `--detail` is 1-indexed but docs are unclear | Low | Works, but could be confusing |
| BO2 tie detection uses title match | Medium | Relies on "BO2" in the title — fragile |
| `is_bo2_tie()` checks child_moneyline closed | Medium | API may not always set this flag |

---
## 3. Recommended Improvements

### 3.1 Code Refactoring (Priority: HIGH)

**Goal:** Make browse.py reviewable by humans

**Specific changes:**

1. **Break `print_browse()` into helper functions:**

```python
def format_match_line(i, e, ml, outcomes, prices, vol, title, url, ...):
    """Format a single match event line."""
    ...

def print_match_section(match_events, ...):
    """Print the MATCH MARKETS section."""
    ...
```
2. **Break `send_to_telegram()` into helper functions:**

```python
def escape_html(text):
    """Escape HTML special characters."""
    ...

def chunk_telegram_message(lines, max_len=4096):
    """Split long messages into chunks."""
    ...
```
3. **Break long lines:**
   - URL construction: use a `params = {...}` dict
   - Long conditionals: extract to named variables
   - Long f-strings: break across multiple lines

4. **Add type hints:**

```python
def fetch_page(q: str, page: int = 1, ...) -> Optional[dict]:
```

5. **Consolidate the duplicate time functions:**
   - `get_match_time_status()` and `get_match_time_str()` share logic
   - Create one function returning both
### 3.2 Unit Tests (Priority: HIGH)

**Goal:** Enable human review via test execution, not line-by-line reading

**Proposed test structure:**

```
tests/
    __init__.py
    test_filters.py     # is_match_market, is_tradeable_event, is_tradeable_market
    test_formatters.py  # format_odds, prob_to_cents, get_match_time_*
    test_browse.py      # Integration tests with mocked API
    test_cli.py         # Argument parsing tests
```

**Test examples:**
```python
# test_formatters.py
def test_prob_to_cents():
    assert prob_to_cents(0.30) == 30
    assert prob_to_cents(0.95) == 95
    assert prob_to_cents(0.001) == 0

def test_format_odds():
    assert format_odds(0.30) == "30c"
    assert format_odds(0.95) == "95c"

# test_filters.py
def test_is_match_market_with_series():
    e = {"seriesSlug": "csg", "gameId": "123", "title": "Team A vs Team B"}
    assert is_match_market(e) == True

def test_is_match_market_vs_syntax():
    e = {"title": "Team A vs Team B"}
    assert is_match_market(e) == True

def test_is_match_market_non_match():
    e = {"title": "Tournament Winner"}
    assert is_match_market(e) == False

# test_filters.py - is_tradeable_event
def test_bo2_tie_filter():
    """BO2 matches ending 1-1 should be filtered out."""
    e = create_bo2_event(ended_tie=True)
    assert is_tradeable_event(e) == False

def test_converged_market_filter():
    """A market with bestBid >= 0.99 should be filtered."""
    e = create_event_with_ml(bestBid=0.99, bestAsk=0.99)
    assert is_tradeable_event(e) == False
```

**Mock API responses needed:**
- Store sample API responses in `tests/fixtures/` as JSON
- Use `responses` or `requests-mock` to mock HTTP calls
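Note that `responses`/`requests-mock` only intercept HTTP calls made through `requests`; while the script still shells out to curl, patching `subprocess.run` is the way to fake API responses. A sketch of that pattern (the `fetch_titles` stand-in and the fixture shape are hypothetical, not the script's real code):

```python
import json
import subprocess
from unittest import mock

# Hypothetical fixture mimicking one gamma-api page
FIXTURE = {"events": [{"title": "Team A vs Team B", "markets": []}]}

def fetch_titles():
    """Stand-in for the real fetch path: run curl, parse JSON stdout."""
    r = subprocess.run(["curl", "-s", "https://example.invalid"],
                       capture_output=True, text=True)
    return [e["title"] for e in json.loads(r.stdout)["events"]]

def test_fetch_titles_with_mocked_curl():
    fake = mock.Mock(stdout=json.dumps(FIXTURE), returncode=0)
    with mock.patch("subprocess.run", return_value=fake):
        # No network, no curl: the fixture drives the assertion
        assert fetch_titles() == ["Team A vs Team B"]
```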
### 3.3 Documentation Improvements (Priority: MEDIUM)

1. Add a troubleshooting section to SKILL.md
2. Add concrete usage examples
3. Add HTML escape notes for Telegram
4. Add a changelog
5. Document the 1-indexed `--detail` argument more clearly
### 3.4 Configuration Options (Priority: LOW)

1. Make the timezone (WIB) configurable via a `--timezone` argument or env var
2. Make the inter-page delay configurable
3. Add a `--json` output option for programmatic use

---
## 4. Summary Table

| Category | Item | Priority | Effort |
|----------|------|----------|--------|
| **Code** | Refactor print_browse() into smaller functions | HIGH | Medium |
| **Code** | Refactor send_to_telegram() into smaller functions | HIGH | Medium |
| **Code** | Break long lines to max 120 chars | HIGH | Low |
| **Tests** | Add unit tests for filters | HIGH | Medium |
| **Tests** | Add unit tests for formatters | HIGH | Low |
| **Tests** | Add integration tests with mocked API | MEDIUM | Medium |
| **Docs** | Add troubleshooting section to SKILL.md | MEDIUM | Low |
| **Docs** | Add usage examples to SKILL.md | MEDIUM | Low |
| **Code** | Consolidate duplicate time functions | LOW | Low |
| **Code** | Add type hints | LOW | Medium |
| **Config** | Make timezone configurable | LOW | Low |

---
## 5. Next Steps

1. **Immediate:** Create the unit test structure under `tests/`
2. **Short-term:** Refactor `print_browse()` and `send_to_telegram()` into smaller functions
3. **Short-term:** Break long lines to max 120 characters
4. **Medium-term:** Add comprehensive unit tests
5. **Medium-term:** Update SKILL.md with troubleshooting and examples

---
## 6. Security Audit

### 6.1 Severity Classification

| Severity | Meaning |
|----------|---------|
| CRITICAL | Immediate action required. Users should stop using the skill until fixed. |
| HIGH | Serious vulnerability. Fix as soon as possible. |
| MEDIUM | Moderate issue. Fix in the normal development cycle. |
| LOW | Minor issue. Fix when convenient. |

---
### 6.2 CRITICAL: Telegram Bot Token Exposed in Process Listings

**Location:** Lines 598-606 (`send_to_telegram()`)

**Description:**
The Telegram bot token is passed as a subprocess argument to `curl`, making it visible in the process command line. On any Unix system, any user can view every running process's command line via `ps aux` or `/proc/*/cmdline`.

**Proof of Concept:**
```bash
# While the script is running (or just after it finishes):
ps aux | grep curl

# Output reveals the token:
# curl -s https://api.telegram.org/bot123456789:ABCdefGHI/sendMessage -d chat_id=... -d text=...
```

**Impact:**
- Any user on a shared system can steal the bot token
- An attacker can send arbitrary messages from the bot
- An attacker can use the bot for spam, phishing, or social engineering
- If the bot is in group chats, an attacker can extract member information

**Fix:**
Telegram puts the bot token in the URL path, so curl's header-based auth options (e.g. `--oauth2-bearer`) do not apply here. Whatever the transport, the token should NEVER appear on the command line. Recommended fix:
```python
import os

def send_to_telegram(...):
    bot_token = os.environ.get("BOT_TOKEN")
    chat_id = os.environ.get("CHAT_ID")
    if not bot_token or not chat_id:
        return

    # Telegram bot tokens cannot be passed via an HTTP auth header,
    # so there is no safe way to hand them to curl as arguments.
    # Instead: use Python's requests library, which keeps the token in memory,
    # OR write the token to a file with 0600 perms and use curl's --config option.
```
**Proper fix using Python's `requests` library:**

```python
import os
import requests

def send_to_telegram(text):
    bot_token = os.environ.get("BOT_TOKEN")
    chat_id = os.environ.get("CHAT_ID")
    if not bot_token or not chat_id:
        return

    url = f"https://api.telegram.org/bot{bot_token}/sendMessage"
    payload = {"chat_id": chat_id, "text": text, "parse_mode": "HTML"}
    # Token stays in memory, never on the command line
    resp = requests.post(url, data=payload, timeout=10)
```

**Interim mitigation:** If you must use curl, write the token to a temporary file with mode 0600 and use curl's `--config` flag, then delete the file immediately after.
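A sketch of that interim mitigation (the helper name is hypothetical; curl config files accept `url = "..."` lines, and `tempfile.mkstemp` already creates the file with owner-only permissions):

```python
import os
import stat
import subprocess
import tempfile

def send_via_curl_config(bot_token, chat_id, text):
    """Keep the token out of the process list by passing the URL
    (which embeds the token) through a 0600 curl config file."""
    fd, path = tempfile.mkstemp()
    try:
        os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # 0600, belt and braces
        with os.fdopen(fd, "w") as f:
            f.write(f'url = "https://api.telegram.org/bot{bot_token}/sendMessage"\n')
        # Only non-secret values appear on the command line
        subprocess.run(
            ["curl", "-s", "--config", path,
             "-d", f"chat_id={chat_id}", "-d", f"text={text}"],
            check=False)
    finally:
        os.remove(path)  # delete immediately so the token isn't left on disk
```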
---
### 6.3 HIGH: HTML Injection in Telegram Messages (XSS-adjacent)

**Location:** Lines 614-661 (`send_to_telegram()`)

**Description:**
Event titles and other data from the Polymarket API are inserted directly into Telegram messages with `parse_mode=HTML`. No HTML escaping is performed: characters like `<`, `>`, and `&` pass through unchanged.

**Attack scenario:**
1. Polymarket (or an attacker who compromises Polymarket data) includes a malicious title like:
   - `<script>alert('XSS')</script>`
   - `<img src=x onerror=alert(1)>`
   - `Team A & Team B` (an unescaped ampersand renders as a broken entity or fails parsing)
2. The bot sends this to Telegram
3. Telegram renders the HTML, potentially executing JavaScript in the context of the Telegram web client

**Note:** Telegram's HTML parser is restrictive (only `<b>`, `<i>`, `<a>`, and a few other tags), so classic XSS is limited. However:
- Malformed HTML can crash the Telegram client
- Attribute-based injections in `<a>` tags could be possible
- Stray `<` and `>` characters are illegal in Telegram HTML and will cause parse errors
**Current vulnerable code (simplified):**
```python
title = e.get("title", "?")
lines.append(f"<b>{i}.</b> <a href=\"{url}\">{title_clean}</a>")
# title_clean is title.split(" - ")[0].strip() -- no HTML escaping
```

**Fix:**
```python
def escape_html(text):
    """Escape HTML special characters for Telegram.

    (Equivalent to html.escape(text, quote=True) from the stdlib.)
    """
    return (text
            .replace("&", "&amp;")
            .replace("<", "&lt;")
            .replace(">", "&gt;")
            .replace('"', "&quot;"))

title = e.get("title", "?")
title_escaped = escape_html(title_clean)
lines.append(f"<b>{i}.</b> <a href=\"{url}\">{title_escaped}</a>")
```

---
### 6.4 MEDIUM: Insufficient Input Sanitization on the `--search` Parameter

**Location:** Line 39 (`fetch_page()`)

**Description:**
The `--search` argument is only sanitized with `.replace(' ', '%20')`. This handles spaces and nothing else. Other URL-sensitive characters (`#`, `?`, `&`, `%`, `+`, etc.) are not properly URL-encoded.

**Attack scenario:**
A user passes `--search "Team%20A"` expecting to search for "Team%20A" literally, but `%20` gets decoded to a space by the URL parser, searching for "Team A" instead.

More concerning: if the search term contains `#`, it truncates the query:
```bash
--search "Team#A"   # -> URL becomes ?q=Team#A&limit=50...
# Everything after the # is treated as a URL fragment, not part of the query
```

**Current code:**
```python
url = (f"{base}?q={q.replace(' ', '%20')}&limit={PAGE_SIZE}&page={page}"
       f"&search_profiles=false&search_tags=false"
       f"&keep_closed_markets=0&events_status=active&cache=false")
```
**Fix:**
Use `urllib.parse.quote()` for proper URL encoding:
```python
from urllib.parse import quote

def fetch_page(q, page=1, ...):
    base = "https://gamma-api.polymarket.com/public-search"
    encoded_q = quote(q, safe='')
    url = (f"{base}?q={encoded_q}&limit={PAGE_SIZE}&page={page}"
           ...)
```

---
### 6.5 MEDIUM: No Bounds Checking on the `--detail` Argument

**Location:** Lines 778-785 (`main()`)

**Description:**
The `--detail N` argument is used to index into `result["match_events"]` without proper bounds checking. There is a fallback (`idx = 0` if out of range), but it silently defaults to index 0, which may not be what the user intended.

```python
idx = args.detail - 1  # User provides a 1-indexed value
if idx < 0 or idx >= len(result["match_events"]):
    idx = 0  # Silently defaults to the first event
detail_event = result["match_events"][idx]
```

**Impact:**
- Low security impact, but confusing UX
- The user expects event #100 but silently gets event #1

**Fix:** Warn the user when the index is out of range instead of silently defaulting.
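A minimal sketch of that fix (the helper name is hypothetical):

```python
import sys

def select_detail_event(match_events, detail_arg):
    """Resolve the 1-indexed --detail argument, exiting with an explicit
    error instead of silently falling back to the first event."""
    idx = detail_arg - 1
    if idx < 0 or idx >= len(match_events):
        sys.exit(f"--detail {detail_arg} is out of range "
                 f"(1..{len(match_events)} available)")
    return match_events[idx]
```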
---
### 6.6 MEDIUM: Potential Denial of Service via Large API Response

**Location:** Lines 53, 79-81, 357-358, 393-403

**Description:**
The code loads the entire API response into memory via `json.loads()`. If a malicious or compromised Polymarket API returned an extremely large JSON payload (gigabytes), the script could exhaust system memory.

**Additional issue:** `fetch_all_pages()` loops up to 100 pages, each with up to 50 events. While bounded, there is no size limit on individual events or their nested `markets` arrays.

**Fix:**
```python
# Add response size limits
MAX_RESPONSE_SIZE = 10 * 1024 * 1024  # 10 MB
if len(r.stdout) > MAX_RESPONSE_SIZE:
    raise ValueError(f"API response too large: {len(r.stdout)} bytes")
```

---
### 6.7 LOW: Bare `except:` Clauses Swallow Security-Relevant Errors

**Location:** Lines 169, 183, 269, 308, 456

**Description:**
Bare `except: pass` catches ALL exceptions, including `KeyboardInterrupt`, `SystemExit`, `MemoryError`, and `OSError`. This silently hides errors that might indicate security problems (e.g., timeouts could suggest a DoS attack in progress).

**Current code:**
```python
try:
    end_dt = datetime.fromisoformat(end_str.replace('Z', '+00:00'))
    ...
except:
    pass  # Silently ignores ALL errors
```

**Fix:**
```python
try:
    end_dt = datetime.fromisoformat(end_str.replace('Z', '+00:00'))
    ...
except (ValueError, TypeError):
    pass  # Only catch expected exceptions
```

---
### 6.8 LOW: No Rate Limiting on API Calls (DoS vector)

**Location:** Lines 74-76 (`fetch_all_pages()`)

**Description:**
An attacker who can pass arguments to this script (e.g., via a web wrapper) could repeatedly call the Polymarket API in a loop, potentially:
1. Getting the user's IP rate-limited or banned by Polymarket
2. Consuming the user's bandwidth
3. Causing resource exhaustion on the host

**Fix:** Add a cooldown between runs if called repeatedly, or use a token-bucket rate limiter.
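A token-bucket limiter small enough to drop into the fetch loop might look like this (a sketch; the class name and the rate/capacity values are illustrative):

```python
import time

class TokenBucket:
    """Allow at most `rate` calls per second, bursting up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def acquire(self):
        """Block until a token is available, then consume it."""
        while True:
            now = time.monotonic()
            # Refill proportionally to elapsed time, capped at capacity
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            time.sleep((1 - self.tokens) / self.rate)
```

The fetch loop would call `bucket.acquire()` before each page request in place of the hardcoded `time.sleep(0.2)`.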
---
### 6.9 Security Summary Table

| # | Issue | Severity | Exploitable Remotely | Fix Complexity |
|---|-------|----------|----------------------|----------------|
| 1 | Telegram bot token in process cmdline | CRITICAL | No (local access needed) | Easy |
| 2 | HTML injection in Telegram messages | HIGH | No (needs Polymarket compromise) | Easy |
| 3 | Insufficient `--search` sanitization | MEDIUM | Yes (any user input) | Easy |
| 4 | No bounds check on `--detail` | MEDIUM | Yes (any user input) | Trivial |
| 5 | Large API response can exhaust memory | MEDIUM | Yes (API or MITM) | Easy |
| 6 | Bare `except:` swallows errors | LOW | No | Trivial |
| 7 | No API rate limiting | LOW | Yes (with script access) | Medium |

---
### 6.10 Recommended Immediate Actions

**If you are currently using this skill with a Telegram bot:**

1. **ROTATE YOUR BOT TOKEN IMMEDIATELY** — Go to @BotFather and use `/revoke` to invalidate the current token. Generate a new one.
2. **Do not use the `--telegram` flag** on shared/multi-user systems until Issue #1 is fixed.
3. **Do not use `--search` with untrusted input** until Issue #3 is fixed.
4. Monitor your Telegram bot's `getUpdates` for unauthorized access.

**Safe usage until fixes are deployed:**
```bash
# Without Telegram (safe):
polymarket-browse --category "Counter Strike" --limit 5

# With Telegram: NOT RECOMMENDED on shared systems
# until the security issues above are addressed
```

---
### 6.11 Fix Priority Order

1. **FIRST (CRITICAL):** Fix the Telegram bot token exposure — use the Python `requests` library instead of a curl subprocess, OR use curl with the `--config` file approach
2. **SECOND (HIGH):** Add HTML escaping for Telegram messages
3. **THIRD (MEDIUM):** Fix `--search` URL encoding with `urllib.parse.quote()`
4. **FOURTH (MEDIUM):** Add `--detail` bounds checking
5. **FIFTH (MEDIUM):** Add response size limits
6. **SIXTH (LOW):** Replace bare `except:` with specific exceptions

---
## Appendix A: Longest Lines in browse.py (for targeted refactoring)

| Line | Chars | Location | Content Summary |
|------|-------|----------|-----------------|
| 474 | 209 | `print_browse()` | Function signature |
| 564 | 152 | `print_detail()` | ML odds formatting |
| 571 | 136 | `print_detail()` | Market outcome formatting |
| 760 | 128 | `send_to_telegram()` | Telegram send call |
| 561 | 126 | `print_detail()` | Spread formatting |
| 736 | 122 | `send_to_telegram()` | Telegram API URL |
| 485 | 121 | `print_browse()` | Fetch stats line |
| 467 | 119 | `print_browse()` | Print category header |
| 728 | 112 | `send_to_telegram()` | Telegram send call |
| 569 | 110 | `print_detail()` | Market spread formatting |

**Key finding:** The `print_browse()` signature itself (line 474), at 209 chars, is the longest line. It should be broken up, or the function should accept a config dict instead of 12 parameters.

---
## Appendix B: Duplicate Code Patterns

### B.1 Time formatting duplicated across 3 functions

| Function | Lines | Purpose |
|----------|-------|---------|
| `get_match_time_status()` | ~40 | Returns a (status_str, urgency) tuple |
| `get_match_time_str()` | ~35 | Returns just the status string |
| `get_start_time_wib()` | ~50 | Returns an (abs_str, rel_str) tuple |

All three parse the same ISO datetime string and compute the same relative-time logic. They should be consolidated into one function that returns all needed values.
### B.2 `is_tradeable_event()` vs `is_tradeable_market()`

Both check convergence (bestBid >= 0.99, bestAsk <= 0.01) and acceptingOrders/closed status. The market-level check is simpler, but the two share the same convergence logic.

---
## Appendix C: Potential Bugs

### C.1 Bare `except:` clauses

Found at lines 169, 183, and similar locations:
```python
except:
    pass
```

**Risk:** Catches KeyboardInterrupt, SystemExit, and json.JSONDecodeError. Should be:
```python
except (ValueError, TypeError):
    pass
```
### C.2 Line 474: `print_browse()` signature is 209 characters

```python
def print_browse(match_events, non_match_events, category, total_raw, total_fetched, total_match, total_non_match, raw_mode=False, partial=False, non_matches_max=5, matches_only=False, non_matches_only=False):
```

**Issue:** 12 parameters is too many. Consider passing a result dict or a config object instead.

**Fix options:**

1. Accept a `BrowseResult` namedtuple/dataclass
2. Split into `print_browse_header()` and `print_browse_sections()`
3. Use `**kwargs`
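Option 1 might look like this (the `BrowseResult` field names are hypothetical, mirroring the current parameters; display flags stay as plain keyword arguments):

```python
from dataclasses import dataclass, field

@dataclass
class BrowseResult:
    # Data produced by browsing; one object replaces the long positional list.
    match_events: list = field(default_factory=list)
    non_match_events: list = field(default_factory=list)
    category: str = ""
    total_raw: int = 0
    total_fetched: int = 0
    total_match: int = 0
    total_non_match: int = 0
    partial: bool = False

def print_browse(result, raw_mode=False, non_matches_max=5,
                 matches_only=False, non_matches_only=False):
    # The body reads fields off one object instead of juggling positions.
    print(f"{result.category}: {result.total_match} matches / "
          f"{result.total_non_match} other markets")
```

Call sites then construct one `BrowseResult` and pass it through, which also makes it trivial to add fields later without touching every caller.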
### C.3 Line 128 in `send_to_telegram()`: `bot_token=os.environ.get("BOT_` (truncated)

```python
bot_token = os.environ.get("BOT_TOKEN")
chat_id = os.environ.get("CHAT_ID")
```

This looks like a line that was cut off in the review output; the actual code is fine. It does, however, highlight that line 582 is long.
### C.4 Unused variables `chunk_len` and `chunk_num`

At line 681 in `send_to_telegram()`:

```python
chunk = []
chunk_len = 0  # never used
chunk_num = 1  # never used
```

---
## Appendix D: Missing Test Coverage

Functions that need tests but currently have none:

```
[ ] fetch_page - needs mock curl response
[ ] fetch_all_pages - needs mock paginated responses
[ ] is_match_market - easy to test with dict inputs
[ ] is_tradeable_event - complex, needs many test cases
[ ] is_tradeable_market - similar to above
[ ] is_bo2_tie - edge cases for BO2 detection
[ ] get_ml_market - easy to test
[ ] get_ml_volume - easy to test
[ ] prob_to_cents - pure function, easy to test
[ ] format_odds - pure function, easy to test
[ ] format_spread - pure function, easy to test
[ ] get_match_time_* - needs timezone mocking
[ ] get_tournament - easy to test
[ ] get_event_url - easy to test
[ ] filter_events - easy to test
[ ] sort_events - easy to test
```
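For the pure formatters the tests are nearly free. A sketch (the `prob_to_cents` stand-in below is an assumption, inferred from `format_spread`'s `f"{prob_to_cents(spread)}c"` usage — probability in, integer cents out):

```python
# Stand-in for the real function, assuming 0.0-1.0 probability -> integer cents.
def prob_to_cents(prob):
    return round(prob * 100)

# Mirrors the format_spread shown in the diff: spread rendered as "Nc".
def format_spread(bid, ask):
    spread = ask - bid
    return f"{prob_to_cents(spread)}c"

def test_prob_to_cents():
    assert prob_to_cents(0.55) == 55
    assert prob_to_cents(0.0) == 0

def test_format_spread():
    assert format_spread(0.52, 0.55) == "3c"
```

Running these under pytest requires no fixtures at all, which is what makes the pure functions the cheapest place to start.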
---
*Report generated by Hermes Agent on 2026-03-25*
@@ -4,11 +4,13 @@ Polymarket Event Browser
 Browse tradeable Polymarket events by game category.
 """
 
-import subprocess
+import html
 import json
 import time
 import argparse
 from datetime import datetime, timezone, timedelta
+from urllib.parse import urlencode
+from urllib.request import urlopen, Request
 
 # ============================================================
 # CONFIG
@@ -17,6 +19,7 @@ from datetime import datetime, timezone, timedelta
 PAGE_SIZE = 50
 MAX_RETRIES = 5
 INITIAL_RETRY_DELAY = 2  # exponential backoff starts at 2s
+WIB = timezone(timedelta(hours=7))  # UTC+7 for Indonesian users
 
 GAME_CATEGORIES = {
     "All Esports": "Esports",
@@ -42,49 +45,65 @@ def fetch_page(q, page=1, max_retries=MAX_RETRIES, initial_delay=INITIAL_RETRY_D
 
     delay = initial_delay
     for attempt in range(max_retries):
+        if attempt > 0:
             time.sleep(delay)
-        r = subprocess.run(
-            ["curl", "-s", url, "--max-time", "10", "-H", "User-Agent: curl/7.88.1"],
-            capture_output=True
-        )
-
-        if r.returncode == 0 and len(r.stdout) > 0:
-            try:
-                return json.loads(r.stdout.decode('utf-8'))
-            except json.JSONDecodeError:
-                if attempt < max_retries - 1:
-                    delay *= 2  # Exponential backoff
-                    continue
-                return None
-        else:
-            # Rate limit or other error - exponential backoff
-            if attempt < max_retries - 1:
-                delay *= 2
-                continue
-            return None
+        try:
+            req = Request(url, headers={"User-Agent": "Mozilla/5.0"})
+            with urlopen(req, timeout=10) as r:
+                return json.loads(r.read())
+        except Exception:
+            if attempt < max_retries - 1:
+                delay *= 2
+                continue
+            return None
     return None
 
 
-def fetch_all_pages(q, max_pages=100):
+def fetch_all_pages(q, matches_max=None, non_matches_max=None):
     """
-    Fetch ALL pages until pagination ends.
-    max_pages is a safety cap to prevent infinite loops.
+    Fetch pages until pagination ends, or until quotas are satisfied.
+
+    Args:
+        q: search query
+        matches_max: stop early once we have this many match events (None = no limit)
+        non_matches_max: stop early once we have this many non-match events (None = no limit)
+
+    Returns:
+        {"events": [...], "total_raw": N, "partial": bool}
     """
     all_events = []
     total_raw = 0
-    for page in range(1, max_pages + 1):
-        time.sleep(0.2)  # small delay between pages (API rate limit is generous)
+    match_count = 0
+    non_match_count = 0
+    page = 0
+    while True:
+        page += 1
+        time.sleep(0.2)
         data = fetch_page(q, page)
         if data is None:
            break
         events = data.get("events", [])
         total_raw = data.get("pagination", {}).get("totalResults", 0)
         all_events.extend(events)
-        # Stop when we get 0 events (no more pages),
-        # OR when we've fetched >= total results
+
+        # Count matches/non-matches in this page
+        for e in events:
+            if is_match_market(e):
+                match_count += 1
+            else:
+                non_match_count += 1
+
+        # Stop if we got what we wanted (only when caps are set)
+        if matches_max is not None and non_matches_max is not None:
+            if match_count >= matches_max and non_match_count >= non_matches_max:
+                break
+
+        # Stop when we get 0 events (no more pages)
         if len(events) == 0:
             break
+        # Stop when we've fetched all known results
         if len(all_events) >= total_raw:
             break
 
     partial = (total_raw > 0 and len(all_events) < total_raw)
     return {"events": all_events, "total_raw": total_raw, "partial": partial}
 
@@ -219,94 +238,79 @@ def format_spread(bid, ask):
     spread = ask - bid
     return f"{prob_to_cents(spread)}c"
 
 
-def get_match_time_status(e):
+def _get_time_data(e, tz=None):
     """
-    Return a human-readable match time status.
-    Returns (status_str, urgency) where urgency is 0-3 (higher = more urgent/live).
-    Uses startTime for actual match start time.
-    Displays times in WIB (UTC+7 for Indonesian users).
+    Unified time data extraction for event timestamps.
+
+    Uses startTime (preferred) or startDate as the event start time.
+    Datetime parsing and all relative calculations are UTC-based.
+    The tz parameter only affects the abs_time formatting.
+
+    Args:
+        e: Event dict with 'startTime' or 'startDate' key.
+        tz: datetime.timezone for abs_time formatting.
+            Defaults to WIB (UTC+7).
+
+    Returns:
+        {
+            "time_status": str,   # e.g. "LIVE", "In 6h", "12h ago"
+            "time_urgency": int,  # 0-3 (higher = more urgent/live)
+            "abs_time": str,      # e.g. "Mar 25, 19:00 WIB" or "TBD"
+        }
     """
-    # Use startTime for actual match start, not startDate (which is market creation time)
+    tz = tz or WIB
     start_str = e.get("startTime") or e.get("startDate", "")
 
     if not start_str:
-        return "TBD", 0
+        return {"time_status": "TBD", "time_urgency": 0, "abs_time": "TBD"}
 
     try:
         start_dt = datetime.fromisoformat(start_str.replace('Z', '+00:00'))
         now_utc = datetime.now(timezone.utc)
-        utc7 = timezone(timedelta(hours=7))
-        now = now_utc.astimezone(utc7)
-        start_utc7 = start_dt.astimezone(utc7)
 
         delta = start_dt - now_utc
+        total_sec = delta.total_seconds()
 
-        if delta.total_seconds() < 0:
-            # Started already
-            hours_ago = abs(delta.total_seconds()) / 3600
+        if total_sec < 0:
+            # Event is in the past
+            hours_ago = abs(total_sec) / 3600
             if hours_ago < 1:
-                return "LIVE", 3
+                time_status = "LIVE"
+                time_urgency = 3
             elif hours_ago < 4:
-                return f"LIVE {int(hours_ago)}h", 3
+                time_status = f"LIVE {int(hours_ago)}h"
+                time_urgency = 3
             elif hours_ago < 24:
-                return f"Started {int(hours_ago)}h ago", 1
+                time_status = f"{int(hours_ago)}h ago"
+                time_urgency = 1
             else:
                 days = int(hours_ago / 24)
-                return f"{days}d ago", 0
+                time_status = f"{days}d ago"
+                time_urgency = 0
         else:
-            # Starts in future
-            hours_until = delta.total_seconds() / 3600
-            if hours_until <= 0:
-                return "LIVE", 3
-            elif hours_until < 1:
-                mins = int(delta.total_seconds() / 60)
-                return f"In {mins}m", 3
-            elif hours_until < 24:
-                return f"In {int(hours_until)}h", 2
+            # Event is in the future
+            if total_sec < 3600:
+                mins = int(total_sec / 60)
+                time_status = f"In {mins}m"
+                time_urgency = 3
+            elif total_sec < 86400:
+                hours_until = int(total_sec / 3600)
+                time_status = f"In {hours_until}h"
+                time_urgency = 2
             else:
-                days = int(hours_until / 24)
-                return f"In {days}d", 1
-    except:
-        return "", 0
-
-
-def get_match_time_str(e):
-    """
-    Return just the time status string (e.g. 'LIVE', 'In 6h', 'In 1d').
-    Uses startTime for actual match start time.
-    """
-    start_str = e.get("startTime") or e.get("startDate", "")
-    if not start_str:
-        return "TBD"
-    try:
-        start_dt = datetime.fromisoformat(start_str.replace('Z', '+00:00'))
-        now_utc = datetime.now(timezone.utc)
-        delta = start_dt - now_utc
-
-        if delta.total_seconds() < 0:
-            hours_ago = abs(delta.total_seconds()) / 3600
-            if hours_ago < 1:
-                return "LIVE"
-            elif hours_ago < 4:
-                return f"LIVE {int(hours_ago)}h"
-            elif hours_ago < 24:
-                return f"{int(hours_ago)}h ago"
-            else:
-                days = int(hours_ago / 24)
-                return f"{days}d ago"
-        else:
-            hours_until = delta.total_seconds() / 3600
-            if hours_until <= 0:
-                return "LIVE"
-            elif hours_until < 1:
-                mins = int(delta.total_seconds() / 60)
-                return f"In {mins}m"
-            elif hours_until < 24:
-                return f"In {int(hours_until)}h"
-            else:
-                days = int(hours_until / 24)
-                return f"In {days}d"
-    except:
-        return ""
+                days = int(total_sec / 86400)
+                time_status = f"In {days}d"
+                time_urgency = 1
+
+        abs_time = start_dt.astimezone(tz).strftime("%b %d, %H:%M ")
+        if tz == WIB:
+            abs_time += "WIB"
+        else:
+            abs_time += start_dt.astimezone(tz).strftime("%Z")
+        return {"time_status": time_status, "time_urgency": time_urgency, "abs_time": abs_time}
+    except Exception:
+        return {"time_status": "", "time_urgency": 0, "abs_time": "TBD"}
 
 
 def filter_events(events, tradeable_only=True):
     """
@@ -325,6 +329,7 @@ def filter_events(events, tradeable_only=True):
 
     return match_events, non_match_events
 
+
 def sort_events(events):
     return sorted(events, key=get_ml_volume, reverse=True)
 
@@ -332,24 +337,214 @@ def sort_events(events):
 # BROWSE
 # ============================================================
 
-def browse_events(q, matches_max=10, non_matches_max=10, tradeable_only=True):
-    result = fetch_all_pages(q)
+def browse_events(q, matches_max=10, non_matches_max=10, tradeable_only=True, sort_by=None):
+    """
+    Browse Polymarket events.
+
+    Args:
+        q: search query
+        matches_max: max number of match markets to return
+        non_matches_max: max number of non-match markets to return
+        tradeable_only: filter to tradeable events only
+        sort_by: None (fast, API order) or "volume" (full fetch, sort by volume desc)
+    """
+    # Pass quotas to fetch_all_pages for early-exit optimization.
+    # Only use early-exit when sort_by is None (no client-side sort needed).
+    use_early_exit = (sort_by is None)
+    fetch_matches_max = matches_max if use_early_exit else None
+    fetch_non_matches_max = non_matches_max if use_early_exit else None
+
+    result = fetch_all_pages(q, matches_max=fetch_matches_max, non_matches_max=fetch_non_matches_max)
     events = result["events"]
     match_events, non_match_events = filter_events(events, tradeable_only)
-    sorted_match = sort_events(match_events)
+
+    # Sort if requested; otherwise preserve API order
+    if sort_by == "volume":
+        match_events = sort_events(match_events)
+        non_match_events = sort_events(non_match_events)
+
     return {
         "query": q,
         "total_raw": result["total_raw"],
         "total_fetched": len(events),
         "total_match": len(match_events),
         "total_non_match": len(non_match_events),
-        "match_events": sorted_match[:matches_max],
+        "match_events": match_events[:matches_max],
         "non_match_events": non_match_events[:non_matches_max],
         "partial": result.get("partial", False),
     }
 
 # ============================================================
-# FORMAT
+# FORMAT — EVENT
+# ============================================================
+
+def format_match_event(e):
+    """
+    Format a match event into a canonical dict for rendering.
+    All computing done here; renderers just template.
+
+    Returns:
+        {
+            "title": str,        # raw title
+            "title_clean": str,  # "Team A vs Team B"
+            "tournament": str,   # "Tournament Name" or ""
+            "url": str,
+            "time_status": str,  # "LIVE", "In 6h", "12h ago"
+            "time_urgency": int, # 0-3
+            "abs_time": str,     # "Mar 25, 19:00 WIB"
+            "team_a": str,
+            "team_b": str,
+            "odds_a": str,       # "55c"
+            "odds_b": str,
+            "vol": int,
+        }
+    """
+    ml = get_ml_market(e)
+    outcomes = json.loads(ml.get("outcomes", "[]")) if ml else []
+    prices = json.loads(ml.get("outcomePrices", "[]")) if ml else []
+    td = _get_time_data(e)
+    title = e.get("title", "")
+
+    team_a = outcomes[0] if len(outcomes) > 0 else "?"
+    team_b = outcomes[1] if len(outcomes) > 1 else "?"
+    odds_a = format_odds(float(prices[0])) if len(prices) > 0 else "?"
+    odds_b = format_odds(float(prices[1])) if len(prices) > 1 else "?"
+
+    if " - " in title:
+        title_clean = title.split(" - ")[0].strip()
+    else:
+        title_clean = title
+
+    tournament = get_tournament(title)
+
+    return {
+        "title": title,
+        "title_clean": title_clean,
+        "tournament": tournament,
+        "url": get_event_url(e),
+        "time_status": td["time_status"],
+        "time_urgency": td["time_urgency"],
+        "abs_time": td["abs_time"],
+        "team_a": team_a,
+        "team_b": team_b,
+        "odds_a": odds_a,
+        "odds_b": odds_b,
+        "vol": get_ml_volume(e),
+    }
+
+
+def format_non_match_event(e):
+    """
+    Format a non-match event into a canonical dict for rendering.
+
+    Returns:
+        {
+            "title": str,
+            "url": str,
+            "time_status": str,
+            "time_urgency": int,
+            "abs_time": str,
+            "market_count": int,
+            "total_vol": int,
+        }
+    """
+    td = _get_time_data(e)
+    total_vol = sum(float(m.get("volume", 0)) for m in e.get("markets", []))
+    market_count = len(e.get("markets", []))
+
+    return {
+        "title": e.get("title", "?"),
+        "url": get_event_url(e),
+        "time_status": td["time_status"],
+        "time_urgency": td["time_urgency"],
+        "abs_time": td["abs_time"],
+        "market_count": market_count,
+        "total_vol": int(total_vol),
+    }
+
+
+# ============================================================
+# FORMAT — RENDER
+# ============================================================
+
+def render_match_lines(event_dict, i, mode):
+    """
+    Render a formatted match event dict into lines of text.
+
+    Args:
+        event_dict: canonical dict from format_match_event()
+        i: 1-based index for the event number
+        mode: "text" for plain text/Markdown, "html" for Telegram HTML
+
+    Returns:
+        List[str], one line per element (no trailing blank line).
+        Caller adds the blank line separator between events.
+    """
+    title_clean = event_dict["title_clean"]
+    url = event_dict["url"]
+    abs_time = event_dict["abs_time"]
+    time_status = event_dict["time_status"]
+    vol = event_dict["vol"]
+    tournament = event_dict["tournament"]
+    team_a = event_dict["team_a"]
+    team_b = event_dict["team_b"]
+    odds_a = event_dict["odds_a"]
+    odds_b = event_dict["odds_b"]
+
+    lines = []
+
+    if mode == "html":
+        lines.append(
+            f"<b>{i}.</b> <a href=\"{url}\">{escape_html(title_clean)}</a>"
+        )
+    else:
+        lines.append(f"{i}. [{title_clean}]({url})")
+
+    lines.append(f" {abs_time} | {time_status}")
+    lines.append(f" Vol: ${vol:,.0f}")
+
+    if tournament:
+        lines.append(f" Tournament: {tournament}")
+
+    lines.append(f" Odds: {team_a} {odds_a} | {odds_b} {team_b}")
+
+    return lines
+
+
+def render_non_match_lines(event_dict, i, mode):
+    """
+    Render a formatted non-match event dict into lines of text.
+
+    Args:
+        event_dict: canonical dict from format_non_match_event()
+        i: 1-based index for the event number
+        mode: "text" for plain text/Markdown, "html" for Telegram HTML
+
+    Returns:
+        List[str], one line per element (no trailing blank line).
+    """
+    title = event_dict["title"]
+    url = event_dict["url"]
+    abs_time = event_dict["abs_time"]
+    time_status = event_dict["time_status"]
+    market_count = event_dict["market_count"]
+    total_vol = event_dict["total_vol"]
+
+    lines = []
+
+    if mode == "html":
+        lines.append(f"<b>{i}.</b> <a href=\"{url}\">{escape_html(title)}</a>")
+    else:
+        lines.append(f"{i}. [{title}]({url})")
+
+    lines.append(f" {abs_time} | {time_status}")
+    lines.append(f" Markets: {market_count} | Total Vol: ${total_vol:,.0f}")
+
+    return lines
+
+
+# ============================================================
+# FORMAT — LEGACY
 # ============================================================
 
 def format_event(e):
@@ -359,12 +554,12 @@ def format_event(e):
     best_bid = float(ml.get("bestBid", 0)) if ml else 0
     best_ask = float(ml.get("bestAsk", 0)) if ml else 0
     vol = get_ml_volume(e)
-    time_status, urgency = get_match_time_status(e)
+    td = _get_time_data(e)
 
     return {
         "title": e.get("title", ""),
-        "time_status": time_status,
-        "time_urgency": urgency,
+        "time_status": td["time_status"],
+        "time_urgency": td["time_urgency"],
         "url": get_event_url(e),
         "livestream": e.get("resolutionSource"),
         "outcomes": outcomes,
@@ -383,11 +578,12 @@ def format_detail_event(e):
     ]
     active_markets = sorted(active_markets, key=lambda m: float(m.get("volume", 0)), reverse=True)
 
-    time_status, urgency = get_match_time_status(e)
+    td = _get_time_data(e)
 
     return {
         "title": e.get("title", ""),
-        "time_status": time_status,
+        "time_status": td["time_status"],
+        "abs_time": td["abs_time"],
         "url": get_event_url(e),
         "livestream": e.get("resolutionSource"),
         "outcomes": json.loads(ml.get("outcomes", "[]")) if ml else [],
@@ -414,48 +610,6 @@ def format_detail_event(e):
 # DISPLAY
 # ============================================================
 
-def get_start_time_wib(e):
-    """Return (date_time_str, relative_str) for display."""
-    start_str = e.get("startTime") or e.get("startDate", "")
-    if not start_str:
-        return "TBD", ""
-    try:
-        start_dt = datetime.fromisoformat(start_str.replace('Z', '+00:00'))
-        now_utc = datetime.now(timezone.utc)
-        utc7 = timezone(timedelta(hours=7))
-        start_utc7 = start_dt.astimezone(utc7)
-
-        # Absolute: "Mar 25, 19:00 WIB"
-        abs_str = start_utc7.strftime("%b %d, %H:%M WIB")
-
-        # Relative: "In 5h", "In 10h", "LIVE", etc.
-        delta = start_dt - now_utc
-        if delta.total_seconds() < 0:
-            hours_ago = abs(delta.total_seconds()) / 3600
-            if hours_ago < 1:
-                rel_str = "LIVE"
-            elif hours_ago < 24:
-                rel_str = f"{int(hours_ago)}h ago"
-            else:
-                days = int(hours_ago / 24)
-                rel_str = f"{days}d ago"
-        else:
-            hours_until = delta.total_seconds() / 3600
-            if hours_until <= 0:
-                rel_str = "LIVE"
-            elif hours_until < 1:
-                mins_until = int(delta.total_seconds() / 60)
-                rel_str = f"In {mins_until}m"
-            elif hours_until < 24:
-                rel_str = f"In {int(hours_until)}h"
-            else:
-                days = int(hours_until / 24)
-                rel_str = f"In {days}d"
-
-        return abs_str, rel_str
-    except:
-        return "TBD", ""
-
-
 def get_header_date():
     """Return current date string like 'Mar 25, 2026'"""
     now_utc = datetime.now(timezone.utc)
@@ -486,9 +640,8 @@ def print_browse(match_events, non_match_events, category, total_raw, total_fetc
     if partial:
         print(f"WARNING: Partial fetch (API error or timeout) — data may be incomplete")
 
-    # --- MATCH MARKETS ---
+    # Determine sections to show
     if not matches_only and not non_matches_only:
-        # Default: show both
         show_matches = True
         show_non_matches = True
     elif matches_only:
@@ -498,68 +651,31 @@ def print_browse(match_events, non_match_events, category, total_raw, total_fetc
         show_matches = False
         show_non_matches = True
 
+    # Match events
     if show_matches:
-        print(f"\nMATCH MARKETS")
+        print("\nMATCH MARKETS")
         if not match_events:
             print(" No match markets found.")
         else:
             for i, e in enumerate(match_events, 1):
-                f = format_event(e)
-                ml = get_ml_market(e)
-                outcomes = json.loads(ml.get("outcomes", "[]")) if ml else []
-                prices = json.loads(ml.get("outcomePrices", "[]")) if ml else []
-                vol = f["volume"]
-                title = f["title"]
-                url = f["url"]
-                start_time_wib, rel_time = get_start_time_wib(e)
-
-                team_a = outcomes[0] if len(outcomes) > 0 else "?"
-                team_b = outcomes[1] if len(outcomes) > 1 else "?"
-                odds_a = format_odds(float(prices[0])) if len(prices) > 0 else "?"
-                odds_b = format_odds(float(prices[1])) if len(prices) > 1 else "?"
-
-                if " - " in title:
-                    title_clean = title.split(" - ")[0].strip()
-                else:
-                    title_clean = title
-
-                tournament = get_tournament(title)
-
-                print(f"\n {i}. [{title_clean}]({url})")
-                print(f" {start_time_wib} | {rel_time}")
-                print(f" Vol: ${vol:,.0f}")
-                if tournament:
-                    print(f" Tournament: {tournament}")
-                print(f" Odds: {team_a} {odds_a} | {odds_b} {team_b}")
-
-    # --- NON-MATCH MARKETS ---
+                fd = format_match_event(e)
+                for line in render_match_lines(fd, i, mode="text"):
+                    print(line)
+
+    # Non-match events
     if show_non_matches and non_match_events:
-        print(f"\nNON-MATCH MARKETS")
+        print("\nNON-MATCH MARKETS")
         for i, e in enumerate(non_match_events[:non_matches_max], 1):
-            title = e.get("title", "?")
-            url = get_event_url(e)
-            start_time_wib, rel_time = get_start_time_wib(e)
-            total_vol = sum(float(m.get("volume", 0)) for m in e.get("markets", []))
-            market_count = len(e.get("markets", []))
-
-            print(f"\n {i}. [{title}]({url})")
-            print(f" {start_time_wib} | {rel_time}")
-            print(f" Markets: {market_count} | Total Vol: ${total_vol:,.0f}")
+            fd = format_non_match_event(e)
+            for line in render_non_match_lines(fd, i, mode="text"):
+                print(line)
 
 def print_detail(e, detail):
-    from datetime import datetime, timezone, timedelta
-    now_utc = datetime.now(timezone.utc)
-    utc7 = timezone(timedelta(hours=7))
-    now_utc7 = now_utc.astimezone(utc7)
-
     print(f"\n{detail['title']}")
     print(f"URL: {detail['url']}")
     print(f"Livestream: {detail['livestream']}")
 
     spread_str = format_spread(detail["best_bid"], detail["best_ask"]) if detail["best_bid"] and detail["best_ask"] else "N/A"
-    time_str = get_match_time_str(e)
     print(f"\n{detail['time_status']}")
     print(f"ML: {detail['outcomes'][0]} {format_odds(float(detail['prices'][0]))} vs {detail['outcomes'][1]} {format_odds(float(detail['prices'][1]))}")
     print(f"ML Vol: ${detail['volume']:,.0f} | {spread_str}")
@@ -576,14 +692,44 @@ def print_detail(e, detail):
 # TELEGRAM
 # ============================================================
 
+def escape_html(text):
+    """Escape HTML-sensitive characters for Telegram parse_mode=HTML."""
+    return (text
+            .replace("&", "&amp;")
+            .replace("<", "&lt;")
+            .replace(">", "&gt;")
+            .replace('"', "&quot;"))
+
+
+def send_telegram_message(bot_token, chat_id, text, timeout=10):
+    """Send a message via Telegram bot API. Returns the message ID on success.
+
+    Raises:
+        RuntimeError: If the Telegram API returns an error (e.g. invalid token, rate limit).
+        URLError/HTTPError: On network or HTTP-level failures.
+    """
+    url = f"https://api.telegram.org/bot{bot_token}/sendMessage"
+    data = urlencode({
+        "chat_id": chat_id,
+        "text": text,
+        "parse_mode": "HTML",
+        "disable_web_page_preview": "true",
+    }).encode("utf-8")
+    req = Request(url, data=data, method="POST")
+    with urlopen(req, timeout=timeout) as resp:
+        result = json.loads(resp.read())
+    if not result.get("ok"):
+        raise RuntimeError(f"Telegram API error: {result.get('description')}")
+    return result["result"]["message_id"]
 
 def send_to_telegram(match_events, non_match_events, category, matches_only=False, non_matches_only=False):
-    """Send browse results to Telegram. Reads BOT_TOKEN and CHAT_ID from environment."""
+    """Send browse results to Telegram. Reads TELEGRAM_BOT_TOKEN and CHAT_ID from environment."""
     import os
-    bot_token = os.environ.get("BOT_TOKEN")
+    bot_token = os.environ.get("TELEGRAM_BOT_TOKEN")
     chat_id = os.environ.get("CHAT_ID")
     if not bot_token or not chat_id:
-        print("WARNING: BOT_TOKEN or CHAT_ID not set in environment. Skipping Telegram send.")
-        return
+        raise RuntimeError("TELEGRAM_BOT_TOKEN or CHAT_ID not set in environment")
 
     from datetime import datetime, timezone, timedelta
     now_utc = datetime.now(timezone.utc)
@@ -596,92 +742,71 @@ def send_to_telegram(match_events, non_match_events, category, matches_only=Fals
     show_non_matches = (not matches_only and not non_matches_only) or non_matches_only
 
     def send(text):
-        result = subprocess.run(
-            ["curl", "-s", f"https://api.telegram.org/bot{bot_token}/sendMessage",
-             "-d", f"chat_id={chat_id}",
-             "-d", f"text={text}",
-             "-d", "parse_mode=HTML",
-             "-d", "disable_web_page_preview=true"],
-            capture_output=True
-        )
-        resp = json.loads(result.stdout.decode())
-        if resp.get("ok"):
-            print(f"  Sent msg {resp['result']['message_id']}")
-        else:
-            print(f"  Error: {resp.get('description')}")
+        msg_id = send_telegram_message(bot_token, chat_id, text)
+        print(f"  Sent msg {msg_id}")
 
-    # Build sections
-    lines = [f"<b>{category.upper()}</b> | {header_date}"]
-    lines.append("")
+    # Build lines
+    lines = [f"<b>{category.upper()}</b> | {header_date}", ""]
 
     if show_matches:
-        lines.append("MATCH MARKETS")
-        lines.append("")
+        lines += ["MATCH MARKETS", ""]
         if not match_events:
             lines.append("  No match markets found.")
         else:
             for i, e in enumerate(match_events, 1):
-                ml = get_ml_market(e)
-                outcomes = json.loads(ml.get("outcomes", "[]")) if ml else []
-                prices = json.loads(ml.get("outcomePrices", "[]")) if ml else []
-                vol = get_ml_volume(e)
-                title = e.get("title", "?")
-                url = get_event_url(e)
-                start_time_wib, rel_time = get_start_time_wib(e)
-                team_a = outcomes[0] if len(outcomes) > 0 else "?"
-                team_b = outcomes[1] if len(outcomes) > 1 else "?"
-                odds_a = format_odds(float(prices[0])) if len(prices) > 0 else "?"
-                odds_b = format_odds(float(prices[1])) if len(prices) > 1 else "?"
-                tournament = get_tournament(title)
-                title_clean = title.split(" - ")[0].strip() if " - " in title else title
-                lines.append(f"<b>{i}.</b> <a href=\"{url}\">{title_clean}</a>")
-                lines.append(f"    {start_time_wib} | {rel_time}")
-                lines.append(f"    Vol: ${vol:,.0f}")
-                if tournament:
-                    lines.append(f"    Tournament: {tournament}")
-                lines.append(f"    Odds: {team_a} {odds_a} | {odds_b} {team_b}")
+                fd = format_match_event(e)
+                lines += render_match_lines(fd, i, mode="html")
                 lines.append("")
         lines.append("")
 
     if show_non_matches:
-        lines.append("NON-MATCH MARKETS")
-        lines.append("")
+        lines += ["NON-MATCH MARKETS", ""]
         if not non_match_events:
             lines.append("  No non-match markets found.")
        else:
             for i, e in enumerate(non_match_events, 1):
-                title = e.get("title", "?")
-                url = get_event_url(e)
-                start_time_wib, rel_time = get_start_time_wib(e)
-                total_vol = sum(float(m.get("volume", 0)) for m in e.get("markets", []))
-                market_count = len(e.get("markets", []))
-                lines.append(f"<b>{i}.</b> <a href=\"{url}\">{title}</a>")
-                lines.append(f"    {start_time_wib} | {rel_time}")
-                lines.append(f"    Markets: {market_count} | Total Vol: ${total_vol:,.0f}")
+                fd = format_non_match_event(e)
+                lines += render_non_match_lines(fd, i, mode="html")
                 lines.append("")
 
-    # Chunk by 10 items (events), respecting 4096 char Telegram limit
-    text = "\n".join(lines)
+    # Chunk and send
+    send_chunked(lines, send, category, header_date, show_matches, show_non_matches)
+
+
+def send_chunked(all_lines, send_fn, category, header_date, show_matches, show_non_matches):
+    """
+    Split already-built lines into Telegram-safe chunks and send them.
+
+    Telegram messages are capped at 4096 chars. Chunks are grouped by
+    section header so no event is split across messages.
+
+    Args:
+        all_lines: Full message lines list (built by caller).
+        send_fn: Closure that sends a single string and prints confirmation.
+        category: Category name for header.
+        header_date: Date string for header.
+        show_matches: Whether MATCH MARKETS section is present.
+        show_non_matches: Whether NON-MATCH MARKETS section is present.
+    """
+    text = "\n".join(all_lines)
     if len(text) <= 4096:
-        send(text)
+        send_fn(text)
         return
 
-    # Split into chunks of 10 events
+    # Split into chunks of 10 events, respecting section headers
     all_items = []
     in_match = True
-    for line in lines:
+    for line in all_lines:
         if line == "MATCH MARKETS":
             in_match = True
         elif line == "NON-MATCH MARKETS":
             in_match = False
-        elif line.startswith("<b>") and ". " in line and "</a>" in line:
+        elif line.startswith("<b>") and "</a>" in line:
+            # Event title line: <b>1.</b> <a href="...">Title</a>
            all_items.append((in_match, line))
 
     chunk = []
-    chunk_len = 0
-    chunk_num = 1
 
-    # Header is always first
     header = f"<b>{category.upper()}</b> | {header_date}\n"
     if show_matches:
         header += "\nMATCH MARKETS\n\n"
@@ -692,9 +817,8 @@ def send_to_telegram(match_events, non_match_events, category, matches_only=Fals
         test_chunk = chunk + [item_line, ""]
         test_text = header + "\n".join(chunk) + "\n".join(test_chunk)
         if len(test_text) > 4096 or len(chunk) >= 10:
-            # Send current chunk
             msg = header + "\n".join(chunk)
-            send(msg)
+            send_fn(msg)
             chunk = [item_line, ""]
             header = f"<b>{category.upper()}</b> (cont.) | {header_date}\n"
             if show_matches and is_match:
@@ -706,7 +830,7 @@ def send_to_telegram(match_events, non_match_events, category, matches_only=Fals
 
     if chunk:
         msg = header + "\n".join(chunk)
-        send(msg)
+        send_fn(msg)
 
 
 # ============================================================
@@ -737,7 +861,7 @@ def main():
     parser.add_argument("--raw", action="store_true",
                         help="Show all events without tradeable filter (for debugging).")
     parser.add_argument("--telegram", action="store_true",
-                        help="Send results to Telegram (BOT_TOKEN and CHAT_ID must be set in environment).")
+                        help="Send results to Telegram (TELEGRAM_BOT_TOKEN and CHAT_ID must be set in environment).")
     args = parser.parse_args()
 
     if args.list_categories:
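The `escape_html` helper added in this diff is small enough to sanity-check in isolation. A standalone sketch of the same replacement chain (note the ordering: `&` must be escaped first, or the entities produced by the later replacements would themselves be mangled):

```python
def escape_html(text):
    # Escape HTML-sensitive characters for Telegram parse_mode=HTML.
    # "&" is replaced first so later entities are not double-escaped.
    return (text
            .replace("&", "&amp;")
            .replace("<", "&lt;")
            .replace(">", "&gt;")
            .replace('"', "&quot;"))

print(escape_html('Team "A" <1.50> & Team B'))
# → Team &quot;A&quot; &lt;1.50&gt; &amp; Team B
```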
1 skills/polymarket-browse/tests/__init__.py (new file)
@@ -0,0 +1 @@
+# Tests package

1209 skills/polymarket-browse/tests/test_browse.py (new file)
File diff suppressed because it is too large
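The chunking behavior that `send_chunked` implements (4096-char Telegram cap, at most 10 events per message, no event split across messages) can be exercised without Telegram. A minimal sketch under those assumptions — the function name `chunk_items` and its exact signature are hypothetical, only the limits mirror the diff:

```python
def chunk_items(items, header, limit=4096, max_items=10):
    # Group pre-rendered event strings into messages that stay under
    # the char limit and hold at most max_items events each.
    # Each event contributes two lines: its text and a blank separator.
    chunks, current = [], []
    for item in items:
        candidate = header + "\n".join(current + [item, ""])
        if current and (len(candidate) > limit or len(current) >= max_items * 2):
            chunks.append(header + "\n".join(current))
            current = []
        current += [item, ""]
    if current:
        chunks.append(header + "\n".join(current))
    return chunks

# 25 events at 10 per message -> 3 messages, each carrying the header.
msgs = chunk_items([f"<b>{i}.</b> event {i}" for i in range(1, 26)], "HEADER\n")
```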