📝 Walkthrough

This PR implements batch-based document processing for collections. It adds database columns for batch tracking, refactors the collection creation workflow into a two-phase batch orchestration system, updates the provider interface to support batch uploads, replaces OpenTelemetry tracing with correlation IDs, and adds gevent-based timeout support to Celery tasks.

Changes

Batch-Based Collection Processing
Sequence Diagram

```mermaid
sequenceDiagram
    participant API as API Endpoint
    participant CeleryWorker as Celery Worker
    participant CollectionSvc as Collection Service
    participant Provider as OpenAI Provider
    participant VectorStore as Vector Store API
    participant Storage as Cloud Storage
    participant Database as Database
    API->>CeleryWorker: start_create_collection_job(...)
    CeleryWorker->>CollectionSvc: execute_setup_job() [Phase 1]
    CollectionSvc->>Database: fetch documents
    CollectionSvc->>Storage: upload_files(docs)
    Storage-->>CollectionSvc: files ready
    CollectionSvc->>Provider: create(docs, vector_store_id=None) [Phase 1]
    Provider->>VectorStore: create vector store
    VectorStore-->>Provider: vector_store_id
    Provider->>VectorStore: update_batch(docs)
    VectorStore-->>Provider: succeeded[], failed[]
    CollectionSvc->>Database: create collection_job (PROCESSING)
    CollectionSvc->>CollectionSvc: split docs into batches
    CollectionSvc->>CeleryWorker: start_collection_batch_job(batch_1)
    CeleryWorker->>CollectionSvc: execute_batch_job(batch_n) [Phase 2]
    loop for each remaining batch
        CollectionSvc->>Provider: create(docs, vector_store_id) [Phase 2, is_final=False]
        CollectionSvc->>CeleryWorker: start_collection_batch_job(batch_n+1)
    end
    CollectionSvc->>Provider: create(final_docs, vector_store_id, is_final=True) [Phase 2]
    CollectionSvc->>Database: finalize collection, link documents
    CollectionSvc->>API: send_callback(success)
```
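For readers who want the Phase 2 chaining in code form, the sketch below captures the pattern the diagram describes. It is illustrative, not the PR's implementation: the task name follows the diagram, while `upload_batch` and `finalize_collection` are hypothetical stand-ins for the real service calls.

```python
# Illustrative only: each batch task uploads its slice, then enqueues the
# next batch, so one worker invocation never holds the whole collection.
from celery import shared_task


def upload_batch(job_id: str, doc_ids: list[str], vector_store_id: str | None) -> None:
    """Stand-in for the real per-batch upload (hypothetical)."""


def finalize_collection(job_id: str) -> None:
    """Stand-in for collection finalization (hypothetical)."""


@shared_task
def run_collection_batch_job(
    job_id: str,
    batch_doc_ids: list[str],
    remaining_batches: list[list[str]],
    vector_store_id: str | None = None,
) -> None:
    upload_batch(job_id, batch_doc_ids, vector_store_id)
    if remaining_batches:
        # Self-chain: hand the rest of the work to a fresh task invocation.
        run_collection_batch_job.delay(
            job_id, remaining_batches[0], remaining_batches[1:], vector_store_id
        )
    else:
        finalize_collection(job_id)
```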
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
backend/app/services/collections/helpers.py (1)
84-99: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Replace the implicit `TypeError` with an explicit validation error.

Removing the `or 0` fallback means a document with `file_size_kb=None` now crashes inside the batching loop with an opaque `unsupported operand type(s) for +: 'int' and 'NoneType'` mid-iteration. Callers cannot tell which document is invalid, and any batches accumulated up to that point are discarded. A pre-loop validation (or an explicit per-doc check) yields a clear message and a deterministic failure point.

🛡️ Proposed fix
```diff
 for doc in documents:
-    doc_size_kb = doc.file_size_kb
+    if doc.file_size_kb is None:
+        raise ValueError(
+            f"[batch_documents] Document {doc.id} has no file_size_kb; "
+            "sizes must be backfilled before batching."
+        )
+    doc_size_kb = doc.file_size_kb
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@backend/app/services/collections/helpers.py` around lines 84-99: The batching loop in batch_documents (the for doc in documents loop using current_batch and current_batch_size_kb) can raise an opaque TypeError when doc.file_size_kb is None; add explicit validation for each doc before using it (either a pre-loop scan or a per-doc check) that verifies file_size_kb is not None and is a numeric type, and if invalid raise a clear ValueError that includes an identifier (e.g., doc.id or doc.name) so callers know which document failed; perform this validation before updating current_batch_size_kb so existing batches are preserved and add a short logger.warning or logger.error with the same diagnostic information when raising.
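For context, the size-capped batching this comment targets follows a common pattern; here is a self-contained sketch, not the repository's `batch_documents` (the `MAX_BATCH_SIZE_KB` cap and `Doc` dataclass are assumptions):

```python
# Sketch of size-capped batching with up-front validation; the constant and
# Doc type are illustrative, not the project's actual definitions.
from dataclasses import dataclass

MAX_BATCH_SIZE_KB = 50_000  # assumed cap


@dataclass
class Doc:
    id: str
    file_size_kb: float | None


def batch_documents(documents: list[Doc]) -> list[list[Doc]]:
    # Validate before batching so a bad doc fails fast with a clear message
    # instead of a mid-loop TypeError that discards accumulated batches.
    for doc in documents:
        if not isinstance(doc.file_size_kb, (int, float)):
            raise ValueError(f"Document {doc.id} has no numeric file_size_kb")

    batches: list[list[Doc]] = []
    current: list[Doc] = []
    current_kb = 0.0
    for doc in documents:
        if current and current_kb + doc.file_size_kb > MAX_BATCH_SIZE_KB:
            batches.append(current)
            current, current_kb = [], 0.0
        current.append(doc)
        current_kb += doc.file_size_kb
    if current:
        batches.append(current)
    return batches
```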
backend/app/crud/rag/open_ai.py (1)

119-151: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Remove the unused `update` method from `OpenAIVectorStoreCrud`.

This method is not called anywhere in the codebase and has been replaced by `update_batch`. Additionally, it's missing a return type hint, which violates the coding guideline requiring type hints on all function return values. Removing it eliminates redundant code and the maintenance burden of two divergent upload flows.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@backend/app/crud/rag/open_ai.py` around lines 119-151: Delete the unused OpenAIVectorStoreCrud.update method (the entire function) since upload logic is now handled by update_batch; after removal, run a quick search for any remaining references to OpenAIVectorStoreCrud.update and remove them, and clean up any now-unused imports or symbols used only by that method (e.g., BytesIO, Document, CloudStorage) to avoid lints and type-hint violations.
backend/app/services/collections/create_collection.py (2)

174-303: 🛠️ Refactor suggestion | 🟠 Major | ⚡ Quick win

Add type hints for `task_instance` (and tighten helper hints).

Per the coding guidelines, all function parameters and return values must have type hints. The following are missing/loose:

- `execute_setup_job(... task_instance, ...) -> None` — `task_instance` lacks a type
- `execute_batch_job(... task_instance, ...) -> None` — same
- `_persist_succeeded_docs(succeeded: list, ...)` — should be `list[Document]`
- `_retry_failed_uploads(vector_store_crud, ..., failed_docs: list, ...)` — `vector_store_crud` lacks a type, `failed_docs` should be `list[Document]`

`task_instance` can be typed as `celery.Task` (or kept as `Any` from `typing` if you want to avoid the dependency leak).

As per coding guidelines, "Always add type hints to all function parameters and return values in Python code".
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@backend/app/services/collections/create_collection.py` around lines 174 - 303, The functions are missing/loose type hints: add an explicit type for task_instance in both execute_setup_job and execute_batch_job (use celery.Task or typing.Any if you want to avoid importing Celery), and tighten helper signatures so _persist_succeeded_docs uses succeeded: list[Document] and _retry_failed_uploads uses failed_docs: list[Document] and type-hint vector_store_crud to the actual CRUD class (e.g., VectorStoreCrud) or typing.Any if that class isn't accessible; also import any needed names (Document, Any, celery.Task) and update return annotations if necessary.
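A minimal illustration of the suggested hints; only the annotations come from the comment, while the surrounding parameter lists and import paths are placeholders:

```python
# Illustrative signatures only; the real parameter lists live in
# create_collection.py. celery.Task avoids the looser typing.Any.
from celery import Task

from app.models.document import Document  # assumed import path


def execute_setup_job(task_instance: Task, job_id: str) -> None: ...


def _persist_succeeded_docs(succeeded: list[Document]) -> None: ...


def _retry_failed_uploads(
    vector_store_crud: "OpenAIVectorStoreCrud",  # forward ref; class named in the review
    failed_docs: list[Document],
) -> None: ...
```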
39-66: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Return type mismatch: declared `-> str` but returns a `UUID`.

`collection_job_id` is a `UUID` (per the parameter annotation on line 43); returning it directly violates the declared `-> str`. Cast or change the annotation.

🐛 Proposed fix

```diff
-    return collection_job_id
+    return str(collection_job_id)
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@backend/app/services/collections/create_collection.py` around lines 39-66: The function start_job currently declares a return type of -> str but returns collection_job_id which is a UUID; fix by either changing the function signature to return -> UUID or converting the returned value to a string with return str(collection_job_id). Update any imports/annotations if you choose UUID (e.g., ensure UUID is imported) and keep the rest of the logic (calls to CollectionJobCrud.update and start_create_collection_job) unchanged.
backend/app/services/collections/providers/openai.py (1)

23-28: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Update test calls to match the new `create` signature.

The test suite in `backend/app/tests/services/collections/providers/test_openai_provider.py` has three test functions that call `provider.create()` with the old three-argument signature:

- `test_create_openai_vector_store_only()` (line 40)
- `test_create_openai_with_assistant()` (line 79)
- `test_create_propagates_exception()` (line 143)

All three pass `storage` as the second argument and a documents list as the third, but the updated signature is `create(collection_request, docs, vector_store_id=None, is_final=False)`. The tests need to pass the documents list as the second argument, not `storage`:

- Change from: `provider.create(collection_request, storage, documents)`
- Change to: `provider.create(collection_request, documents)` (with `vector_store_id` as a named argument if needed)

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@backend/app/services/collections/providers/openai.py` around lines 23 - 28, Update the three failing tests so they call the new create signature: replace calls to provider.create(collection_request, storage, documents) with provider.create(collection_request, documents) and, if a vector_store_id or is_final was intended, pass those as named args (e.g. provider.create(collection_request, documents, vector_store_id=..., is_final=...)); modify the three test functions in backend/app/tests/services/collections/providers/test_openai_provider.py (test_create_openai_vector_store_only, test_create_openai_with_assistant, test_create_propagates_exception) to pass the documents list as the second parameter and remove the positional storage argument.
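Sketched against the new signature, the first test might look like this; the fixture names and the assertion are hypothetical, and the two-positional-argument call shape is the point:

```python
# Hypothetical rewrite of one test; fixtures and the returned object's shape
# are assumptions taken from the review's description of the new signature.
def test_create_openai_vector_store_only(provider, collection_request, documents):
    result = provider.create(collection_request, documents, vector_store_id=None)
    assert result is not None
```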
🧹 Nitpick comments (4)
backend/app/models/document.py (1)
49-53: ⚡ Quick win

Align column comment between model and migration.

Migration `055` sets the column comment to `"File ID assigned by the LLM provider (e.g. OpenAI file ID) to avoid re-uploading"`, but the model declares it as `"File ID assigned by OpenAI (avoid re-uploading)"`. Future `alembic revision --autogenerate` runs may flag this drift as an unintended schema change. Pick one wording and keep both in sync.
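A sketch of the aligned field, assuming the model uses SQLModel's `Field` with `sa_column_kwargs` as the comment implies:

```python
# Illustrative shell around the one field; the real Document model in
# document.py has more columns. Only the comment string is the fix.
from sqlmodel import Field, SQLModel


class Document(SQLModel, table=True):
    id: int | None = Field(default=None, primary_key=True)
    openai_file_id: str | None = Field(
        default=None,
        sa_column_kwargs={
            "comment": (
                "File ID assigned by the LLM provider "
                "(e.g. OpenAI file ID) to avoid re-uploading"
            )
        },
    )
```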
🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@backend/app/models/document.py` around lines 49-53: The model field openai_file_id's sa_column_kwargs comment string mismatches the migration; update the Field definition for openai_file_id in the Document model to use the exact comment used in migration 055 ("File ID assigned by the LLM provider (e.g. OpenAI file ID) to avoid re-uploading") so the sa_column_kwargs comment and the migration stay in sync and prevent autogenerate diffs.

backend/app/alembic/versions/055_add_batch_tracking_to_collections_jobs.py (1)
47-55: 💤 Low value

Migration name only mentions `collection_jobs`, but it also alters `document`.

The filename and revision message refer to `collection_jobs` only, while the upgrade also adds `document.openai_file_id`. Consider splitting into two migrations or renaming/updating the message so the change scope is discoverable from the migration filename and history.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@backend/app/alembic/versions/055_add_batch_tracking_to_collections_jobs.py` around lines 47-55: The migration '055_add_batch_tracking_to_collections_jobs.py' declares changes for collection_jobs but also adds a column to document (op.add_column adding document.openai_file_id); either split the document change into a separate migration or rename/update this migration's filename and revision message to reflect both changes (and update the upgrade/revision docstring) so the history accurately describes the addition of document.openai_file_id alongside the collection_jobs alterations.

backend/app/services/collections/providers/openai.py (1)
47-52: ⚡ Quick win

Open one DB session for the whole batch, not one per document.

The current code opens a fresh `Session(engine)` and constructs a `DocumentCrud` for every successful upload. For a collection with hundreds or thousands of docs this multiplies connection overhead unnecessarily. A single session outside the loop with per-doc commits (or a single commit at the end if you don't need partial-progress durability) is cleaner.

♻️ Proposed refactor
```diff
     def upload_files(
         self,
         storage: CloudStorage,
         docs: list[Document],
         project_id: int,
     ) -> None:
-        for doc in docs:
-            if self.get_existing_file_id(doc):
-                continue
-            try:
-                content = storage.get(doc.object_store_url)
-                if doc.file_size_kb is None:
-                    doc.file_size_kb = round(len(content) / 1024, 2)
-                f_obj = BytesIO(content)
-                f_obj.name = doc.fname
-                uploaded = self.client.files.create(file=f_obj, purpose="assistants")
-                doc.openai_file_id = uploaded.id
-                with Session(engine) as session:
-                    document_crud = DocumentCrud(session, project_id)
-                    db_doc = document_crud.read_one(doc.id)
-                    db_doc.openai_file_id = uploaded.id
-                    db_doc.file_size_kb = doc.file_size_kb
-                    document_crud.update(db_doc)
-            except Exception as err:
-                ...
+        with Session(engine) as session:
+            document_crud = DocumentCrud(session, project_id)
+            for doc in docs:
+                if self.get_existing_file_id(doc):
+                    continue
+                content = storage.get(doc.object_store_url)
+                if doc.file_size_kb is None:
+                    doc.file_size_kb = round(len(content) / 1024, 2)
+                f_obj = BytesIO(content)
+                f_obj.name = doc.fname
+                uploaded = self.client.files.create(file=f_obj, purpose="assistants")
+                doc.openai_file_id = uploaded.id
+                db_doc = document_crud.read_one(doc.id)
+                db_doc.openai_file_id = uploaded.id
+                db_doc.file_size_kb = doc.file_size_kb
+                document_crud.update(db_doc)
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@backend/app/services/collections/providers/openai.py` around lines 47-52: The code currently creates a new Session(engine) and DocumentCrud for every uploaded document; instead open a single Session(engine) outside the upload loop and reuse it (and a DocumentCrud instance per project_id) for each doc, calling document_crud.read_one(doc.id), updating db_doc.openai_file_id and db_doc.file_size_kb, and then document_crud.update(db_doc) inside the loop; perform either a session.commit() per document for partial durability or one commit after the loop, and ensure the session is closed once after processing the entire batch.

backend/app/services/collections/create_collection.py (1)
475-491: ⚡ Quick win

Change `except BaseException` to `except Exception`.

`BaseException` catches `KeyboardInterrupt`, `SystemExit`, and `GeneratorExit`, which should normally be allowed to propagate. Additionally, gevent's `Timeout` deliberately inherits from `BaseException` (not `Exception`), so this generic handler will swallow timeouts that escape the dedicated `except Timeout` handler above and incorrectly mark the job as failed. Use `except Exception` instead.

♻️ Proposed change
```diff
-    except BaseException as err:
+    except Exception as err:
         logger.error(
             "[create_collection.execute_batch_job] Batch %d failed | job_id=%s, error=%s",
             ...
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@backend/app/services/collections/create_collection.py` around lines 475 - 491, The catch-all in create_collection.execute_batch_job currently uses "except BaseException as err" which improperly catches KeyboardInterrupt/SystemExit and gevent Timeouts; change that handler to "except Exception as err" so only regular exceptions are caught (leaving the earlier "except Timeout" and system-exiting signals to propagate), and keep the existing logging, _mark_job_failed, and callback logic unchanged.
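The hierarchy claim is easy to verify in isolation; this snippet assumes only that gevent is installed:

```python
# gevent.Timeout subclasses BaseException, not Exception, precisely so that
# ordinary `except Exception` blocks don't swallow it.
from gevent import Timeout

print(issubclass(Timeout, Exception))      # False
print(issubclass(Timeout, BaseException))  # True
```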
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@backend/app/celery/tasks/job_execution.py`:
- Around line 74-105: The gevent_timeout decorator currently raises TimeoutError
unconditionally in its finally block causing tasks like
run_create_collection_job and run_collection_batch_job to always fail; modify
gevent_timeout (the decorator implementation) so that the Timeout exception is
raised only inside the except Timeout: handler and the finally: block only calls
timeout.cancel() (no raise), ensuring timeout.cancel() is reachable and
successful task completions do not raise TimeoutError.
In `@backend/app/celery/utils.py`:
- Around line 185-208: gevent_timeout currently always raises TimeoutError and
never cancels the gevent Timeout; fix wrapper in gevent_timeout by tracking
whether the gevent Timeout fired (e.g., timed_out flag and optionally store
result/exception), don't unconditionally raise in finally, always call
timeout.cancel() in the finally block, and only raise TimeoutError (or re-raise
the stored Timeout) after timeout.cancel() if timed_out is true; reference
wrapper, Timeout, timeout.cancel(), task_name and func.__name__ to locate where
to apply the change.
In `@backend/app/crud/rag/open_ai.py`:
- Around line 158-163: The docstring for the batch upload method incorrectly
refers to provider_file_id; update it to reference the actual Document attribute
used in the code (doc.openai_file_id) so the docstring matches the
implementation (see the method that calls upload_and_poll / the loop that reads
doc.openai_file_id). Ensure the sentence now states that all docs must have
openai_file_id set before calling this method and return description remains
unchanged.
- Around line 182-190: In OpenAIVectorStoreCrud.update_batch, when
batch.file_counts.failed > 0, don't mark all docs for retry; call the OpenAI
helper client.beta.vector_stores.file_batches.list_files(batch_id=batch.id,
vector_store_id=vector_store_id, filter="failed") to get only failed file
entries, map those failed file identifiers back to the input docs list (using
the same file id/key used when building docs), and extend the failed list with
only those docs so upload_and_poll() is retried only for genuinely failed files
instead of the entire batch.
In `@backend/app/services/collections/create_collection.py`:
- Around line 122-172: The two helper functions _persist_succeeded_docs and
_retry_failed_uploads (and the stale docstring reference to
_upload_batch_with_retry) are dead code and OpenAIVectorStoreCrud is unused;
either wire them into the batch path (execute_setup_job / execute_batch_job) or
remove them. Fix by removing the unused helpers _persist_succeeded_docs and
_retry_failed_uploads and the OpenAIVectorStoreCrud import, and update the
execute_batch_job docstring to not reference _upload_batch_with_retry;
alternatively, if you intend to keep retry logic, add calls from
execute_batch_job/execute_setup_job to _retry_failed_uploads (and ensure
vector_store_crud is passed) and implement or rename _upload_batch_with_retry
accordingly so the docstring matches the implemented function.
- Around line 304-311: Update the Phase 2 docstring to remove the reference to
the non-existent _upload_batch_with_retry and instead describe the actual
behavior: that the code calls provider.create(...) which delegates to
vector_store_crud.update_batch, and that inline retries are handled by
_retry_failed_uploads (if used) or by the underlying vector_store_crud; ensure
the docstring accurately states that failed items are retried via
_retry_failed_uploads or the vector_store_crud retry semantics, and that the
function still checkpoints progress, queues next batch, and finalizes the
collection on the last batch.
- Around line 215-220: The log call in create_collection.execute_setup_job uses
four format specifiers but only passes job_id and len(flat_docs), causing a
runtime TypeError; update the logger.info call to either (A) reduce the format
string to match the two provided args (e.g., remove failed and duration_s
placeholders) or (B) compute and supply the missing values by timing the
upload_files call and getting a failed count (modify upload_files to return a
result struct with failed_count and have execute_setup_job measure duration_s
and pass job_id, len(flat_docs), failed_count, duration_s into logger.info).
Ensure the change references logger.info and the upload_files/flat_docs
variables so the log formatting and values are consistent.
- Around line 243-253: The first batch enqueue call to
start_collection_batch_job is missing the required vector_store_id expected by
execute_batch_job, causing a TypeError; fix it by passing vector_store_id=None
in the start_collection_batch_job invocation (where project_id/job_id/trace_id
are passed) so execute_batch_job receives the argument, or alternatively add a
default vector_store_id: Optional[...] = None to execute_batch_job's signature;
reference start_collection_batch_job and execute_batch_job when making the
change.
In `@backend/app/services/collections/providers/openai.py`:
- Around line 30-59: The upload_files loop in OpenAIProvider.upload_files
currently logs per-document exceptions and continues, leaving docs with None
file_size_kb/openai_file_id and causing downstream TypeError or silent failures;
modify upload_files to either (A) fail-fast by re-raising the caught exception
after logging so callers (e.g., create_collection.execute_setup_job) can stop
and surface the real error, or (B) accumulate per-doc failures into a structured
result (e.g., list of successes and failures) and return that to callers so they
can decide (and avoid passing docs without openai_file_id to
vector_store_crud.update_batch); update the function signature and callers
accordingly (refer to upload_files, create_collection.execute_setup_job, and
vector_store_crud.update_batch) so callers handle the returned error info or the
propagated exception.
---
Outside diff comments:
In `@backend/app/crud/rag/open_ai.py`:
- Around line 119-151: Delete the unused OpenAIVectorStoreCrud.update method
(the entire function) since upload logic is now handled by update_batch; after
removal, run a quick search for any remaining references to
OpenAIVectorStoreCrud.update and remove them, and clean up any now-unused
imports or symbols used only by that method (e.g., BytesIO, Document,
CloudStorage) to avoid lints and type-hint violations.
In `@backend/app/services/collections/create_collection.py`:
- Around line 174-303: The functions are missing/loose type hints: add an
explicit type for task_instance in both execute_setup_job and execute_batch_job
(use celery.Task or typing.Any if you want to avoid importing Celery), and
tighten helper signatures so _persist_succeeded_docs uses succeeded:
list[Document] and _retry_failed_uploads uses failed_docs: list[Document] and
type-hint vector_store_crud to the actual CRUD class (e.g., VectorStoreCrud) or
typing.Any if that class isn't accessible; also import any needed names
(Document, Any, celery.Task) and update return annotations if necessary.
- Around line 39-66: The function start_job currently declares a return type of
-> str but returns collection_job_id which is a UUID; fix by either changing the
function signature to return -> UUID or converting the returned value to a
string with return str(collection_job_id). Update any imports/annotations if you
choose UUID (e.g., ensure UUID is imported) and keep the rest of the logic
(calls to CollectionJobCrud.update and start_create_collection_job) unchanged.
In `@backend/app/services/collections/helpers.py`:
- Around line 84-99: The batching loop in batch_documents (the for doc in
documents loop using current_batch and current_batch_size_kb) can raise an
opaque TypeError when doc.file_size_kb is None; add explicit validation for each
doc before using it (either a pre-loop scan or a per-doc check) that verifies
file_size_kb is not None and is a numeric type, and if invalid raise a clear
ValueError that includes an identifier (e.g., doc.id or doc.name) so callers
know which document failed; perform this validation before updating
current_batch_size_kb so existing batches are preserved and add a short
logger.warning or logger.error with the same diagnostic information when
raising.
In `@backend/app/services/collections/providers/openai.py`:
- Around line 23-28: Update the three failing tests so they call the new create
signature: replace calls to provider.create(collection_request, storage,
documents) with provider.create(collection_request, documents) and, if a
vector_store_id or is_final was intended, pass those as named args (e.g.
provider.create(collection_request, documents, vector_store_id=...,
is_final=...)); modify the three test functions in
backend/app/tests/services/collections/providers/test_openai_provider.py
(test_create_openai_vector_store_only, test_create_openai_with_assistant,
test_create_propagates_exception) to pass the documents list as the second
parameter and remove the positional storage argument.
---
Nitpick comments:
In `@backend/app/alembic/versions/055_add_batch_tracking_to_collections_jobs.py`:
- Around line 47-55: The migration
'055_add_batch_tracking_to_collections_jobs.py' declares changes for
collection_jobs but also adds a column to document (op.add_column adding
document.openai_file_id); either split the document change into a separate
migration or rename/update this migration's filename and revision message to
reflect both changes (and update the upgrade/revision docstring) so the history
accurately describes the addition of document.openai_file_id alongside the
collection_jobs alterations.
In `@backend/app/models/document.py`:
- Around line 49-53: The model field openai_file_id's sa_column_kwargs comment
string mismatches the migration; update the Field definition for openai_file_id
in the Document model to use the exact comment used in migration 055 ("File ID
assigned by the LLM provider (e.g. OpenAI file ID) to avoid re-uploading") so
the sa_column_kwargs comment and the migration stay in sync and prevent
autogenerate diffs.
In `@backend/app/services/collections/create_collection.py`:
- Around line 475-491: The catch-all in create_collection.execute_batch_job
currently uses "except BaseException as err" which improperly catches
KeyboardInterrupt/SystemExit and gevent Timeouts; change that handler to "except
Exception as err" so only regular exceptions are caught (leaving the earlier
"except Timeout" and system-exiting signals to propagate), and keep the existing
logging, _mark_job_failed, and callback logic unchanged.
In `@backend/app/services/collections/providers/openai.py`:
- Around line 47-52: The code currently creates a new Session(engine) and
DocumentCrud for every uploaded document; instead open a single Session(engine)
outside the upload loop and reuse it (and a DocumentCrud instance per
project_id) for each doc, calling document_crud.read_one(doc.id), updating
db_doc.openai_file_id and db_doc.file_size_kb, and then
document_crud.update(db_doc) inside the loop; perform either a session.commit()
per document for partial durability or one commit after the loop, and ensure the
session is closed once after processing the entire batch.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 14146497-9eeb-46d5-94d9-fe7751afed6a
📒 Files selected for processing (12)
- backend/app/alembic/versions/055_add_batch_tracking_to_collections_jobs.py
- backend/app/api/docs/documents/upload.md
- backend/app/celery/tasks/job_execution.py
- backend/app/celery/utils.py
- backend/app/crud/rag/open_ai.py
- backend/app/models/collection_job.py
- backend/app/models/document.py
- backend/app/services/collections/create_collection.py
- backend/app/services/collections/helpers.py
- backend/app/services/collections/providers/base.py
- backend/app/services/collections/providers/openai.py
- backend/app/tests/services/collections/test_helpers.py
| """ | ||
| Attach a batch of documents to the vector store via a single upload_and_poll call. | ||
|
|
||
| All docs must have provider_file_id set before calling this method. | ||
| Returns (succeeded, failed) — failed docs should be retried in the next batch. | ||
| """ |
Docstring references a field name that doesn't exist.

The docstring says docs must have `provider_file_id` set, but the code reads `doc.openai_file_id` (line 175) and that's the field defined on the `Document` model. Update the docstring to use the actual attribute name.
📝 Proposed fix

```diff
-        All docs must have provider_file_id set before calling this method.
+        All docs must have openai_file_id set before calling this method.
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| """ | |
| Attach a batch of documents to the vector store via a single upload_and_poll call. | |
| All docs must have provider_file_id set before calling this method. | |
| Returns (succeeded, failed) — failed docs should be retried in the next batch. | |
| """ | |
| """ | |
| Attach a batch of documents to the vector store via a single upload_and_poll call. | |
| All docs must have openai_file_id set before calling this method. | |
| Returns (succeeded, failed) — failed docs should be retried in the next batch. | |
| """ |
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@backend/app/crud/rag/open_ai.py` around lines 158 - 163, The docstring for
the batch upload method incorrectly refers to provider_file_id; update it to
reference the actual Document attribute used in the code (doc.openai_file_id) so
the docstring matches the implementation (see the method that calls
upload_and_poll / the loop that reads doc.openai_file_id). Ensure the sentence
now states that all docs must have openai_file_id set before calling this method
and return description remains unchanged.
```python
        logger.info(
            "[create_collection.execute_setup_job] All file uploads complete | "
            "job_id=%s, total=%d, failed=%d, duration_s=%.2f",
            job_id,
            len(flat_docs),
        )
```
Logger format string has 4 placeholders but only 2 arguments — runtime `TypeError`.

The format string contains `%s`, `total=%d`, `failed=%d`, `duration_s=%.2f` (four placeholders), but you only pass `job_id` and `len(flat_docs)`. At runtime Python's logging will raise `TypeError: not enough arguments for format string` while formatting; the log will not be emitted as intended (only an internal logging error is written to stderr). You also reference a `failed` count and `duration_s` that are never computed in this function.
🐛 Proposed fix

```diff
-        logger.info(
-            "[create_collection.execute_setup_job] All file uploads complete | "
-            "job_id=%s, total=%d, failed=%d, duration_s=%.2f",
-            job_id,
-            len(flat_docs),
-        )
+        logger.info(
+            "[create_collection.execute_setup_job] All file uploads complete | "
+            "job_id=%s, total=%d",
+            job_id,
+            len(flat_docs),
+        )
```
+ )If you actually want failed/duration_s, change upload_files to return a result struct and time the call so those values exist.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-        logger.info(
-            "[create_collection.execute_setup_job] All file uploads complete | "
-            "job_id=%s, total=%d, failed=%d, duration_s=%.2f",
-            job_id,
-            len(flat_docs),
-        )
+        logger.info(
+            "[create_collection.execute_setup_job] All file uploads complete | "
+            "job_id=%s, total=%d",
+            job_id,
+            len(flat_docs),
+        )
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@backend/app/services/collections/create_collection.py` around lines 215 -
220, The log call in create_collection.execute_setup_job uses four format
specifiers but only passes job_id and len(flat_docs), causing a runtime
TypeError; update the logger.info call to either (A) reduce the format string to
match the two provided args (e.g., remove failed and duration_s placeholders) or
(B) compute and supply the missing values by timing the upload_files call and
getting a failed count (modify upload_files to return a result struct with
failed_count and have execute_setup_job measure duration_s and pass job_id,
len(flat_docs), failed_count, duration_s into logger.info). Ensure the change
references logger.info and the upload_files/flat_docs variables so the log
formatting and values are consistent.
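The failure mode described above reproduces with just the standard library:

```python
# A %-style record with more placeholders than arguments is not raised to the
# caller: logging catches the TypeError internally and prints
# "--- Logging error ---" with a traceback to stderr instead of the message.
import logging

logging.basicConfig(level=logging.INFO)
logging.getLogger(__name__).info("job_id=%s, total=%d, failed=%d", "abc", 5)
```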
```python
        start_collection_batch_job(
            project_id=project_id,
            job_id=job_id,
            trace_id=trace_id,
            batch_number=1,
            batch_doc_ids=batch_doc_ids[0],
            remaining_batches=batch_doc_ids[1:],
            request=request,
            with_assistant=with_assistant,
            organization_id=organization_id,
        )
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Inspect start_collection_batch_job and start_create_collection_job signatures and how they forward kwargs
rg -nP -C8 'def\s+start_collection_batch_job\b|def\s+start_create_collection_job\b' --type=py
```

Repository: ProjectTech4DevAI/kaapi-backend
Length of output: 2005

🏁 Script executed:

```bash
# Find execute_setup_job and check what it passes to start_collection_batch_job
rg -nP -A 30 'def execute_setup_job' backend/app/services/collections/ --type=py | head -100
```

Repository: ProjectTech4DevAI/kaapi-backend
Length of output: 2824

🏁 Script executed:

```bash
# Find execute_batch_job signature to check vector_store_id parameter
rg -nP -A 5 'def execute_batch_job' backend/app/services/collections/ --type=py
```

Repository: ProjectTech4DevAI/kaapi-backend
Length of output: 556

🏁 Script executed:

```bash
# Find run_collection_batch_job Celery task definition
rg -nP -A 10 'def run_collection_batch_job' backend/app/celery/tasks/ --type=py
```

Repository: ProjectTech4DevAI/kaapi-backend
Length of output: 927

🏁 Script executed:

```bash
# Get full execute_batch_job signature and first few lines
rg -nP -A 20 'def execute_batch_job' backend/app/services/collections/create_collection.py --type=py
```

Repository: ProjectTech4DevAI/kaapi-backend
Length of output: 906

🏁 Script executed:

```bash
# Read the specific lines 243-253 in create_collection.py to see the actual call
sed -n '240,260p' backend/app/services/collections/create_collection.py
```

Repository: ProjectTech4DevAI/kaapi-backend
Length of output: 698

🏁 Script executed:

```bash
# Also check run_collection_batch_job to see if it processes kwargs
rg -nP -A 20 'def run_collection_batch_job' backend/app/celery/tasks/job_execution.py --type=py
```

Repository: ProjectTech4DevAI/kaapi-backend
Length of output: 763

🏁 Script executed:

```bash
# Check if vector_store_id is set anywhere in execute_setup_job before the call
rg -nP -B 50 'start_collection_batch_job' backend/app/services/collections/create_collection.py | grep -E '(vector_store_id|def execute_setup_job)' | head -20
```

Repository: ProjectTech4DevAI/kaapi-backend
Length of output: 238

🏁 Script executed:

```bash
# Check the full execute_setup_job to see if vector_store_id is created
sed -n '174,250p' backend/app/services/collections/create_collection.py | grep -n 'vector_store_id'
```

Repository: ProjectTech4DevAI/kaapi-backend
Length of output: 57

🏁 Script executed:

```bash
# Get the full execute_setup_job function to see vector_store_id handling
sed -n '174,290p' backend/app/services/collections/create_collection.py
```

Repository: ProjectTech4DevAI/kaapi-backend
Length of output: 3915
Pass `vector_store_id` in the first batch enqueue call.

The call to `start_collection_batch_job` at lines 243-253 does not pass `vector_store_id`, but `execute_batch_job` requires it (line 299, no default). The first batch will fail with `TypeError: execute_batch_job() missing 1 required positional argument: 'vector_store_id'`.

Either provide a default of `None` to `execute_batch_job`'s `vector_store_id` parameter, or pass `vector_store_id=None` explicitly in the `start_collection_batch_job` call.
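The first option, sketched; every parameter other than `vector_store_id` is a placeholder:

```python
# Illustrative signature only; the real parameter list is in
# create_collection.py. The default lets the Phase 1 enqueue omit the
# argument before any vector store exists.
from typing import Any


def execute_batch_job(
    task_instance: Any,
    job_id: str,
    batch_number: int,
    vector_store_id: str | None = None,
) -> None: ...
```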
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@backend/app/services/collections/create_collection.py` around lines 243 -
253, The first batch enqueue call to start_collection_batch_job is missing the
required vector_store_id expected by execute_batch_job, causing a TypeError; fix
it by passing vector_store_id=None in the start_collection_batch_job invocation
(where project_id/job_id/trace_id are passed) so execute_batch_job receives the
argument, or alternatively add a default vector_store_id: Optional[...] = None
to execute_batch_job's signature; reference start_collection_batch_job and
execute_batch_job when making the change.
| """ | ||
| Phase 2: Upload one batch of documents to the vector store. | ||
| - Uploads the batch; any failures within the batch are retried inline by _upload_batch_with_retry | ||
| - Raises immediately if all retries for the batch are exhausted | ||
| - Checkpoints progress to the DB | ||
| - If more batches remain, queues the next batch task | ||
| - If this is the last batch, finalizes: creates Collection, links docs, marks job SUCCESSFUL | ||
| """ |
Docstring references a function that doesn't exist.

The docstring claims "any failures within the batch are retried inline by `_upload_batch_with_retry`", but that function isn't defined in this file. The actual code calls `provider.create(...)` (line 354), which delegates to `vector_store_crud.update_batch`; whatever retry semantics live there are not visible here. Either implement inline retry (you have `_retry_failed_uploads` already, see prior comment) or update the docstring to reflect what really happens.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@backend/app/services/collections/create_collection.py` around lines 304 -
311, Update the Phase 2 docstring to remove the reference to the non-existent
_upload_batch_with_retry and instead describe the actual behavior: that the code
calls provider.create(...) which delegates to vector_store_crud.update_batch,
and that inline retries are handled by _retry_failed_uploads (if used) or by the
underlying vector_store_crud; ensure the docstring accurately states that failed
items are retried via _retry_failed_uploads or the vector_store_crud retry
semantics, and that the function still checkpoints progress, queues next batch,
and finalizes the collection on the last batch.
```python
    def upload_files(
        self,
        storage: CloudStorage,
        docs: list[Document],
        project_id: int,
    ) -> None:
        for doc in docs:
            if self.get_existing_file_id(doc):
                continue
            try:
                content = storage.get(doc.object_store_url)
                if doc.file_size_kb is None:
                    doc.file_size_kb = round(len(content) / 1024, 2)
                f_obj = BytesIO(content)
                f_obj.name = doc.fname
                uploaded = self.client.files.create(file=f_obj, purpose="assistants")
                doc.openai_file_id = uploaded.id
                with Session(engine) as session:
                    document_crud = DocumentCrud(session, project_id)
                    db_doc = document_crud.read_one(doc.id)
                    db_doc.openai_file_id = uploaded.id
                    db_doc.file_size_kb = doc.file_size_kb
                    document_crud.update(db_doc)
            except Exception as err:
                logger.error(
                    "[OpenAIProvider.upload_files] Failed to upload file | doc_id=%s, error=%s",
                    doc.id,
                    str(err),
                    exc_info=True,
                )
```
`upload_files` silently swallows per-document failures, leaving inconsistent state.

When `storage.get` or `client.files.create` raises, the exception is logged but not re-raised. The loop proceeds to the next doc, leaving `doc.openai_file_id` (and possibly `doc.file_size_kb`) unset on the failed doc. Two concrete downstream consequences:

- In `create_collection.execute_setup_job` (line 222), `total_size_kb = sum(doc.file_size_kb for doc in flat_docs)` will raise `TypeError: unsupported operand type(s) for +: 'float' and 'NoneType'` for any doc that failed upload, masking the real OpenAI/storage error with a confusing arithmetic error.
- The doc without an `openai_file_id` still gets passed into `vector_store_crud.update_batch`, leading to silent data loss or a separate failure inside the vector-store upload path.

Either fail fast on the first error, or collect failures and surface a structured result so callers can decide. Continuing past failures while logging is the worst of both worlds.
🛡️ Suggested approach

```diff
     def upload_files(
         self,
         storage: CloudStorage,
         docs: list[Document],
         project_id: int,
     ) -> None:
+        failed: list[tuple[Document, Exception]] = []
         for doc in docs:
             if self.get_existing_file_id(doc):
                 continue
             try:
                 content = storage.get(doc.object_store_url)
                 ...
             except Exception as err:
                 logger.error(
                     "[OpenAIProvider.upload_files] Failed to upload file | doc_id=%s, error=%s",
                     doc.id,
                     str(err),
                     exc_info=True,
                 )
+                failed.append((doc, err))
+        if failed:
+            ids = [str(d.id) for d, _ in failed]
+            raise RuntimeError(f"Failed to upload {len(failed)} document(s): {ids}")
```
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@backend/app/services/collections/providers/openai.py` around lines 30 - 59,
The upload_files loop in OpenAIProvider.upload_files currently logs per-document
exceptions and continues, leaving docs with None file_size_kb/openai_file_id and
causing downstream TypeError or silent failures; modify upload_files to either
(A) fail-fast by re-raising the caught exception after logging so callers (e.g.,
create_collection.execute_setup_job) can stop and surface the real error, or (B)
accumulate per-doc failures into a structured result (e.g., list of successes
and failures) and return that to callers so they can decide (and avoid passing
docs without openai_file_id to vector_store_crud.update_batch); update the
function signature and callers accordingly (refer to upload_files,
create_collection.execute_setup_job, and vector_store_crud.update_batch) so
callers handle the returned error info or the propagated exception.
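Option (B) from the prompt, returning a structured result instead of raising, could be shaped like this; the `UploadResult` type and its fields are hypothetical:

```python
# Hypothetical return type for the accumulate-and-return variant; callers
# check `failed` and decide whether to abort or retry before update_batch.
from dataclasses import dataclass, field
from typing import Any


@dataclass
class UploadResult:
    succeeded: list[Any] = field(default_factory=list)  # docs with openai_file_id set
    failed: list[tuple[Any, Exception]] = field(default_factory=list)

    @property
    def ok(self) -> bool:
        return not self.failed
```

Callers such as `execute_setup_job` could then abort before summing file sizes or calling `update_batch` whenever `result.ok` is false.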
Summary
Target issue is #798