Quickrefactor for response type flag

This commit is contained in:
Julius Unverfehrt 2022-03-02 13:15:12 +01:00
parent a6beeeec10
commit d08f2d5d77
3 changed files with 37 additions and 36 deletions

View File

@@ -6,38 +6,38 @@ The Infrastructure expects to be deployed in the same Pod / local environment as
 A configuration is located in `/config.yaml`. All relevant variables can be configured via exporting environment variables.
-| Environment Variable | Default | Description |
-|-------------------------------|--------------------------------|--------------------------------------------------------------------------------|
-| _service_ | | |
-| LOGGING_LEVEL_ROOT | DEBUG | Logging level for service logger |
-| RESPONSE_AS_FILE | False | Whether the response is stored as file on storage or sent as stream |
-| RESPONSE_FILE_EXTENSION | ".NER_ENTITIES.json.gz" | Extension of the file that stores the analyzed response on storage |
-| _probing_webserver_ | | |
-| PROBING_WEBSERVER_HOST | "0.0.0.0" | Probe webserver address |
-| PROBING_WEBSERVER_PORT | 8080 | Probe webserver port |
-| PROBING_WEBSERVER_MODE | production | Webserver mode: {development, production} |
-| _rabbitmq_ | | |
-| RABBITMQ_HOST | localhost | RabbitMQ host address |
-| RABBITMQ_PORT | 5672 | RabbitMQ host port |
-| RABBITMQ_USERNAME | user | RabbitMQ username |
-| RABBITMQ_PASSWORD | bitnami | RabbitMQ password |
-| RABBITMQ_HEARTBEAT | 7200 | Controls AMQP heartbeat timeout in seconds |
-| _queues_ | | |
-| REQUEST_QUEUE | request_queue | Requests to service |
-| RESPONSE_QUEUE | response_queue | Responses by service |
-| DEAD_LETTER_QUEUE | dead_letter_queue | Messages that failed to process |
-| _callback_ | | |
-| RETRY | False | Toggles retry behaviour |
-| MAX_ATTEMPTS | 3 | Number of times a message may fail before being published to dead letter queue |
-| ANALYSIS_ENDPOINT | "http://127.0.0.1:5000" | |
-| _storage_ | | |
-| STORAGE_BACKEND | s3 | The type of storage to use {s3, azure} |
-| STORAGE_BUCKET | "pyinfra-test-bucket" | The bucket / container to pull files specified in queue requests from |
-| TARGET_FILE_EXTENSION | ".TEXT.json.gz" | Defines type of file to pull from storage: .TEXT.json.gz or .ORIGIN.pdf.gz |
-| STORAGE_ENDPOINT | "http://127.0.0.1:9000" | |
-| STORAGE_KEY | | |
-| STORAGE_SECRET | | |
-| STORAGE_AZURECONNECTIONSTRING | "DefaultEndpointsProtocol=..." | |
+| Environment Variable | Default | Description |
+|-------------------------------|--------------------------------|--------------------------------------------------------------------------------------------------|
+| _service_ | | |
+| LOGGING_LEVEL_ROOT | DEBUG | Logging level for service logger |
+| RESPONSE_TYPE | "stream" | Whether the analysis response is stored as file on storage or sent as stream: "file" or "stream" |
+| RESPONSE_FILE_EXTENSION | ".NER_ENTITIES.json.gz" | Extension of the file that stores the analyzed response on storage |
+| _probing_webserver_ | | |
+| PROBING_WEBSERVER_HOST | "0.0.0.0" | Probe webserver address |
+| PROBING_WEBSERVER_PORT | 8080 | Probe webserver port |
+| PROBING_WEBSERVER_MODE | production | Webserver mode: {development, production} |
+| _rabbitmq_ | | |
+| RABBITMQ_HOST | localhost | RabbitMQ host address |
+| RABBITMQ_PORT | 5672 | RabbitMQ host port |
+| RABBITMQ_USERNAME | user | RabbitMQ username |
+| RABBITMQ_PASSWORD | bitnami | RabbitMQ password |
+| RABBITMQ_HEARTBEAT | 7200 | Controls AMQP heartbeat timeout in seconds |
+| _queues_ | | |
+| REQUEST_QUEUE | request_queue | Requests to service |
+| RESPONSE_QUEUE | response_queue | Responses by service |
+| DEAD_LETTER_QUEUE | dead_letter_queue | Messages that failed to process |
+| _callback_ | | |
+| RETRY | False | Toggles retry behaviour |
+| MAX_ATTEMPTS | 3 | Number of times a message may fail before being published to dead letter queue |
+| ANALYSIS_ENDPOINT | "http://127.0.0.1:5000" | |
+| _storage_ | | |
+| STORAGE_BACKEND | s3 | The type of storage to use {s3, azure} |
+| STORAGE_BUCKET | "pyinfra-test-bucket" | The bucket / container to pull files specified in queue requests from |
+| TARGET_FILE_EXTENSION | ".TEXT.json.gz" | Defines type of file to pull from storage: .TEXT.json.gz or .ORIGIN.pdf.gz |
+| STORAGE_ENDPOINT | "http://127.0.0.1:9000" | |
+| STORAGE_KEY | | |
+| STORAGE_SECRET | | |
+| STORAGE_AZURECONNECTIONSTRING | "DefaultEndpointsProtocol=..." | |
 ## Response Format

View File

@@ -1,7 +1,7 @@
 service:
   logging_level: $LOGGING_LEVEL_ROOT|DEBUG # Logging level for service logger
   response:
-    save: $RESPONSE_AS_FILE|False # Whether the response is stored as file on storage or sent as stream
+    type: $RESPONSE_TYPE|"stream" # Whether the analysis response is stored as file on storage or sent as stream
     extension: $RESPONSE_FILE_EXTENSION|".NER_ENTITIES.json.gz" # {.IMAGE_INFO.json.gz | .NER_ENTITIES.json.gz}
 probing_webserver:
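
The `$VAR|default` values in the config above suggest environment-variable substitution with an inline fallback. A minimal sketch of such a resolver, assuming that exact placeholder syntax (the helper and regex are illustrative, not part of the repository):

```python
import os
import re

# Matches "$NAME|default" placeholders as they appear in config.yaml (assumed syntax).
_PLACEHOLDER = re.compile(r"^\$(\w+)\|(.*)$")

def resolve(value: str) -> str:
    """Return the environment variable's value, or the inline default if unset."""
    match = _PLACEHOLDER.match(value)
    if not match:
        return value  # plain value, no substitution
    name, default = match.groups()
    return os.environ.get(name, default)
```

With `LOGGING_LEVEL_ROOT` unset, `resolve("$LOGGING_LEVEL_ROOT|DEBUG")` falls back to `"DEBUG"`; exporting the variable overrides the default.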

View File

@@ -63,16 +63,17 @@ def make_callback_for_output_queue(json_wrapped_body_processor, output_queue_name
         dossier_id, file_id, result = json_wrapped_body_processor(body)
         # TODO Unify analysis Response for image-prediction and ner-prediction
-        if not CONFIG.service.response.save:
+        if CONFIG.service.response.type == "stream":
             result = {"dossierId": dossier_id, "fileId": file_id, "imageMetadata": result}
             result = json.dumps(result)
-        else:
+        elif CONFIG.service.response.type == "file":
             result = json.dumps(result)
             upload_compressed_response(
                 get_storage(CONFIG.storage.backend), CONFIG.storage.bucket, dossier_id, file_id, result
             )
             result = json.dumps({"dossierId": dossier_id, "fileId": file_id})
+        else:
+            raise ValueError(f"Unexpected response type: {CONFIG.service.response.type!r}")
         channel.basic_publish(exchange="", routing_key=output_queue_name, body=result)
         channel.basic_ack(delivery_tag=method.delivery_tag)
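
Under the new flag the queue payload differs per mode. The branching can be sketched as a standalone pure function, which makes the two shapes easy to test in isolation (a hypothetical refactor; `upload_compressed_response` is stubbed out, field names taken from the diff):

```python
import json

def build_response_body(response_type: str, dossier_id: str, file_id: str, result: dict) -> str:
    """Build the message published to the response queue for each response type."""
    if response_type == "stream":
        # The full analysis result travels on the queue.
        return json.dumps({"dossierId": dossier_id, "fileId": file_id, "imageMetadata": result})
    elif response_type == "file":
        # The result would be uploaded to storage here (upload stubbed out);
        # the queue message then only references the dossier and file.
        return json.dumps({"dossierId": dossier_id, "fileId": file_id})
    raise ValueError(f"Unexpected response type: {response_type!r}")
```

In "stream" mode the consumer receives the analysis inline; in "file" mode it must fetch the stored `RESPONSE_FILE_EXTENSION` object itself.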