# Jobs

Track optimization status, browse job history, and download result artifacts.
## Job Lifecycle

Every optimization job progresses through a fixed set of states.

- **queued**: Job has been accepted and is waiting for a worker to pick it up.
- **running**: Optimization is actively being computed. Market data has been fetched.
- **uploading**: Computation is complete. Results and artifacts are being uploaded to storage.
- **succeeded**: All methods completed. Artifacts are ready for download.
- **failed**: An error occurred. Check `error_code` and `error_detail` for diagnostics.
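Since `succeeded` and `failed` are the only terminal states, clients typically poll the status endpoint until the job settles in one of them. A minimal polling helper might look like this sketch; the `fetch_status` callable is a placeholder for your own wrapper around `GET /jobs/{run_id}`:

```python
import time

# Terminal states from the lifecycle above; a job never leaves these.
TERMINAL_STATES = {"succeeded", "failed"}

def poll_job(fetch_status, interval=2.0, timeout=300.0):
    """Poll fetch_status() until the job reaches a terminal state.

    fetch_status is any zero-argument callable returning the current
    status string. Raises TimeoutError if the job does not settle
    within `timeout` seconds.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in TERMINAL_STATES:
            return status
        time.sleep(interval)
    raise TimeoutError("job did not reach a terminal state in time")
```

In practice `fetch_status` would be something like `lambda: requests.get(f"{BASE_URL}/jobs/{RUN_ID}", headers=headers).json()["status"]`; a fixed two-second interval is an assumption here, not a documented polling recommendation.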
## GET /jobs/{run_id}

Check the current status of an optimization job.
### Authentication

Required: you must own the run (the job must have been submitted under your account).
### Path Parameters
| Parameter | Type | Description |
|---|---|---|
| run_id | UUID | The unique identifier returned when the job was submitted. |
### Response

Successful job:

```json
{
  "run_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "status": "succeeded",
  "created_at": "2025-01-15T10:30:00Z",
  "started_at": "2025-01-15T10:30:02Z",
  "finished_at": "2025-01-15T10:31:14Z",
  "error_code": null,
  "error_message": null,
  "error_detail": null
}
```

Failed job:
```json
{
  "run_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "status": "failed",
  "created_at": "2025-01-15T10:30:00Z",
  "started_at": "2025-01-15T10:30:02Z",
  "finished_at": "2025-01-15T10:30:08Z",
  "error_code": 50002,
  "error_message": "DATA_FETCH_ERROR",
  "error_detail": "Unable to fetch price data for ticker INVALIDTICKER.NSE"
}
```

### curl

```bash
curl https://api.portfolioopt.in/jobs/a1b2c3d4-e5f6-7890-abcd-ef1234567890 \
  -H "Authorization: Bearer <YOUR_TOKEN>"
```
### Python

```python
import requests

BASE_URL = "https://api.portfolioopt.in"
TOKEN = "<YOUR_TOKEN>"
RUN_ID = "a1b2c3d4-e5f6-7890-abcd-ef1234567890"

headers = {"Authorization": f"Bearer {TOKEN}"}

response = requests.get(f"{BASE_URL}/jobs/{RUN_ID}", headers=headers)
response.raise_for_status()
job = response.json()

print(f"Status: {job['status']}")
if job["status"] == "failed":
    print(f"Error: {job['error_message']} - {job['error_detail']}")
```
## GET /jobs/history

List your optimization job history with cursor-based pagination.
### Authentication

Required: returns only jobs belonging to the authenticated user.
### Query Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| limit | integer | 20 | Number of items per page. Maximum 100. |
| cursor | string | — | Opaque pagination cursor. Pass the next_cursor value from the previous response to fetch the next page. |
| q | string | — | Search by run_id prefix. Useful for quickly finding a specific job. |
### Response

```json
{
  "items": [
    {
      "run_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
      "status": "succeeded",
      "created_at": "2025-01-15T10:30:00Z",
      "methods": ["MVO", "HRP"],
      "stock_count": 5
    },
    {
      "run_id": "f9e8d7c6-b5a4-3210-fedc-ba9876543210",
      "status": "running",
      "created_at": "2025-01-15T11:00:00Z",
      "methods": ["MinVol"],
      "stock_count": 8
    }
  ],
  "next_cursor": "eyJjIjoiMjAyNS0wMS0xNVQxMDozMDowMFoiLCJpIjoiYTFiMmMzZDQifQ=="
}
```

**Pagination note:** This endpoint uses cursor-based pagination. When `next_cursor` is `null`, you have reached the last page. Do not attempt to construct or decode cursor values; treat them as opaque strings.
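The follow-the-cursor loop can also be wrapped in a reusable generator, which keeps the opaque cursor confined to one place. In this sketch, `fetch_page` is a hypothetical callable wrapping `GET /jobs/history` that returns the decoded response dict:

```python
def iter_history(fetch_page, limit=20):
    """Yield job items one by one, following next_cursor across pages.

    fetch_page(limit, cursor) must return the decoded /jobs/history
    response dict. The cursor is passed back verbatim and never
    inspected, per the pagination note above.
    """
    cursor = None
    while True:
        page = fetch_page(limit, cursor)
        yield from page["items"]
        cursor = page.get("next_cursor")
        if cursor is None:
            break
```

With `requests`, a suitable `fetch_page` would build `params = {"limit": limit}` plus `cursor` when set, then return `requests.get(f"{BASE_URL}/jobs/history", params=params, headers=headers).json()`.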
### curl

```bash
# First page
curl "https://api.portfolioopt.in/jobs/history?limit=10" \
  -H "Authorization: Bearer <YOUR_TOKEN>"

# Next page (use next_cursor from previous response)
curl "https://api.portfolioopt.in/jobs/history?limit=10&cursor=eyJjIjoiMjAyNS0wMS..." \
  -H "Authorization: Bearer <YOUR_TOKEN>"
```
### Python

```python
import requests

BASE_URL = "https://api.portfolioopt.in"
TOKEN = "<YOUR_TOKEN>"

headers = {"Authorization": f"Bearer {TOKEN}"}

# Fetch all jobs using cursor-based pagination
all_jobs = []
cursor = None
while True:
    params = {"limit": 20}
    if cursor:
        params["cursor"] = cursor
    resp = requests.get(f"{BASE_URL}/jobs/history", params=params, headers=headers)
    resp.raise_for_status()
    data = resp.json()
    all_jobs.extend(data["items"])
    cursor = data.get("next_cursor")
    if not cursor:
        break

print(f"Total jobs: {len(all_jobs)}")
for job in all_jobs:
    print(f"  {job['run_id'][:8]}... {job['status']} ({job['stock_count']} stocks)")
```
## GET /jobs/{run_id}/artifacts

Retrieve downloadable artifacts (charts, data files, reports) for a completed job.
### Authentication

Required: you must own the run.
### Path Parameters
| Parameter | Type | Description |
|---|---|---|
| run_id | UUID | The job identifier. Job must have status succeeded. |
### Response

```json
{
  "artifacts": [
    {
      "artifact_id": "art_001",
      "method": "MVO",
      "kind": "returns_dist",
      "label": "MVO Returns Distribution",
      "signed_url": "https://storage.portfolioopt.in/artifacts/art_001?sig=...",
      "content_type": "image/png"
    },
    {
      "artifact_id": "art_002",
      "method": "MVO",
      "kind": "max_drawdown",
      "label": "MVO Maximum Drawdown",
      "signed_url": "https://storage.portfolioopt.in/artifacts/art_002?sig=...",
      "content_type": "image/png"
    },
    {
      "artifact_id": "art_global_001",
      "method": null,
      "kind": "covariance_heatmap",
      "label": "Covariance Heatmap",
      "signed_url": "https://storage.portfolioopt.in/artifacts/art_global_001?sig=...",
      "content_type": "image/png"
    },
    {
      "artifact_id": "art_global_002",
      "method": null,
      "kind": "optimization_report",
      "label": "Optimization Report (Excel)",
      "signed_url": "https://storage.portfolioopt.in/artifacts/art_global_002?sig=...",
      "content_type": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"
    }
  ],
  "data_parquets": {
    "benchmark_returns": "https://storage.portfolioopt.in/parquet/bench_001?sig=...",
    "cumulative_returns": "https://storage.portfolioopt.in/parquet/cum_001?sig=...",
    "stock_yearly_returns": "https://storage.portfolioopt.in/parquet/yearly_001?sig=..."
  },
  "data_files": {
    "chart_bundle": "https://storage.portfolioopt.in/bundles/chart_bundle_001.json.gz?sig=..."
  }
}
```

### Artifact Types
| Kind | Scope | Content Type | Description |
|---|---|---|---|
| `data` | Global | `application/vnd.openxmlformats-officedocument.spreadsheetml.sheet` | Excel report (label: `optimization_report`) |
| `data` | Global | `application/json` | Pre-computed chart data bundle, gzip-compressed JSON (label: `chart_bundle`) |
| `report_pdf` | Global | `application/pdf` | PDF report (current format). Legacy runs may use `kind="data"` with `label="optimization_report_pdf"` |
| `data` | Global | `application/parquet` | Benchmark daily returns (label: `benchmark_returns`) |
| `data` | Global | `application/parquet` | Cumulative portfolio returns (label: `cumulative_returns`) |
| `data` | Global | `application/parquet` | Yearly returns per stock (label: `stock_yearly_returns`) |
**Signed URL expiry:** All `signed_url` values expire after 1 hour. If you need to download artifacts after expiry, call this endpoint again to receive fresh URLs.
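For long-running downloads it can help to retry once with fresh URLs when a signature has gone stale. The sketch below assumes an expired signature is rejected with HTTP 403 (an assumption, not documented behavior); `fetch` and `refresh_url` are hypothetical callables, in practice a thin wrapper around `requests.get` and a re-call of the artifacts endpoint:

```python
def fetch_with_refresh(fetch, url, refresh_url):
    """Fetch a signed URL, retrying once with a fresh URL on HTTP 403.

    fetch(url) returns a (status_code, body_bytes) pair; refresh_url()
    re-fetches the artifact listing and returns a new signed_url.
    The 403-on-expiry assumption is ours; adjust if the storage
    service signals expiry differently.
    """
    status, body = fetch(url)
    if status == 403:  # signature likely expired; get a fresh URL and retry
        status, body = fetch(refresh_url())
    if status != 200:
        raise RuntimeError(f"download failed with HTTP {status}")
    return body
```

One refresh is enough here: the re-fetched URL is valid for a further hour, so a second consecutive 403 indicates a real permission problem rather than expiry.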
### curl

```bash
curl https://api.portfolioopt.in/jobs/a1b2c3d4-e5f6-7890-abcd-ef1234567890/artifacts \
  -H "Authorization: Bearer <YOUR_TOKEN>"
```
### Python: Download All Artifacts

```python
import os

import requests

BASE_URL = "https://api.portfolioopt.in"
TOKEN = "<YOUR_TOKEN>"
RUN_ID = "a1b2c3d4-e5f6-7890-abcd-ef1234567890"

headers = {"Authorization": f"Bearer {TOKEN}"}

# Fetch artifact metadata
resp = requests.get(f"{BASE_URL}/jobs/{RUN_ID}/artifacts", headers=headers)
resp.raise_for_status()
data = resp.json()

# Download each artifact using its signed URL
output_dir = f"./results/{RUN_ID[:8]}"
os.makedirs(output_dir, exist_ok=True)

for artifact in data["artifacts"]:
    ext = "png" if "png" in artifact["content_type"] else "xlsx"
    filename = f"{artifact['kind']}_{artifact['method'] or 'global'}.{ext}"
    filepath = os.path.join(output_dir, filename)
    print(f"Downloading {artifact['label']}...")
    download = requests.get(artifact["signed_url"])
    download.raise_for_status()
    with open(filepath, "wb") as f:
        f.write(download.content)

# Download Parquet data files
for name, url in data["data_parquets"].items():
    filepath = os.path.join(output_dir, f"{name}.parquet")
    print(f"Downloading {name}.parquet...")
    download = requests.get(url)
    download.raise_for_status()
    with open(filepath, "wb") as f:
        f.write(download.content)

print(f"\nAll artifacts saved to {output_dir}/")
```
## Error Codes

Job-specific error codes returned in the `error_code` and `error_message` fields.
| Code | Name | Description |
|---|---|---|
| 40401 | NOT_FOUND | Job not found or not owned by the authenticated user |
| 50001 | OPTIMIZATION_FAILED | Optimization computation failed during processing |
| 50002 | DATA_FETCH_ERROR | Could not fetch market data for one or more tickers |
| 50099 | UNEXPECTED_ERROR | An unexpected server error occurred |
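A client can branch on these codes to decide what to do with a failed job. The mapping below is a hypothetical example, not part of the API; which codes are worth resubmitting is a judgment call for your application:

```python
# Hypothetical client-side classification of the error codes above.
RETRYABLE = {50099}      # UNEXPECTED_ERROR: transient, resubmitting may succeed
USER_FIXABLE = {50002}   # DATA_FETCH_ERROR: e.g. an invalid ticker in the request

def classify_failure(error_code):
    """Return a coarse action hint for a failed job's error_code."""
    if error_code in RETRYABLE:
        return "retry"
    if error_code in USER_FIXABLE:
        return "fix-input"
    # 40401 NOT_FOUND, 50001 OPTIMIZATION_FAILED, and unknown codes:
    # surface error_detail to the user or your logs.
    return "report"
```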