Compare commits

...

60 Commits

Author SHA1 Message Date
Andrei Isvoran
559a0b5e56 RED-7738 - Fix rollback issue and treat non-manual redaction removal in a Transactional block 2023-10-10 15:33:43 +03:00
Corina Olariu
004923bdea Merge branch 'RED-7185-clone3.6' into 'release/1.363.x'
RED-7185 - RM-46 - Error message when adjusting the Justification

See merge request redactmanager/persistence-service!166
2023-10-09 13:16:22 +02:00
Corina Olariu
13ed7a3a8a RED-7185 - RM-46 - Error message when adjusting the Justification
- validate the name and description of dossier template when cloning
- check for null for dossier template's name
2023-10-09 12:36:15 +03:00
Kilian Schüttler
6724483759 Merge branch 'RED-7326-filesize' into 'release/1.363.x'
RED-7326 - Backport fileSize fix

See merge request redactmanager/persistence-service!159
2023-10-05 16:28:46 +02:00
Andrei Isvoran
829132199e RED-7326 - Backport fileSize fix 2023-10-05 16:28:46 +02:00
Andrei Isvoran
9a3921ed55 Merge branch 'RED-7694-backport' into 'release/1.363.x'
RED-7694 - Backport license fix for startDate

See merge request redactmanager/persistence-service!157
2023-10-05 11:37:42 +02:00
Andrei Isvoran
ac58eb8e40 RED-7694 - Backport license fix for startDate 2023-10-05 12:33:07 +03:00
Andrei Isvoran
a01ec605b2 Merge branch 'RED-7694' into 'release/1.363.x'
RED-7694 - Backport Fix invalid date period

See merge request redactmanager/persistence-service!154
2023-10-04 11:37:31 +02:00
Andrei Isvoran
9e5bd8cfef RED-7694 - Backport Fix invalid date period 2023-10-04 11:37:31 +02:00
Corina Olariu
bf625c1457 Merge branch 'RED-7185-adjustLimit3.6' into 'release/1.363.x'
RED-7185 - RM-46 - Error message when adjusting the Justification

See merge request redactmanager/persistence-service!153
2023-10-04 11:24:10 +02:00
Corina Olariu
807da0a44a RED-7185 - RM-46 - Error message when adjusting the Justification
- add transactional on manual changes so the rollback can take place if comment is too long
2023-10-03 11:17:58 +03:00
Corina Olariu
9cf2bbf7ca RED-7185 - RM-46 - Error message when adjusting the Justification
- permit only comments with length <= 4000 characters
2023-10-02 10:01:34 +03:00
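
Taken together, the two RED-7185 commits above pair a length check with a transactional boundary: the comment is validated inside the same transaction that applies the change, so an over-long comment rolls the whole update back instead of leaving a partial one behind. A minimal sketch of that pattern, assuming a Spring/Lombok service; the CommentStore interface and the method names are illustrative, not classes from this diff:

import javax.transaction.Transactional;

import org.springframework.stereotype.Service;

import lombok.RequiredArgsConstructor;

@Service
@RequiredArgsConstructor
public class CommentUpdateSketch {

    /** Illustrative stand-in for the real JPA repositories. */
    public interface CommentStore {
        void updateComment(String annotationId, String comment);
    }

    private static final int MAX_COMMENT_LENGTH = 4000;

    private final CommentStore commentStore;

    @Transactional
    public void updateComment(String annotationId, String comment) {
        if (comment != null && comment.length() > MAX_COMMENT_LENGTH) {
            // An unchecked exception thrown inside the transaction rolls back every
            // change made so far, so no partial update is left behind.
            throw new IllegalArgumentException(String.format("The comment is too long (%s), max length %s",
                    comment.length(), MAX_COMMENT_LENGTH));
        }
        commentStore.updateComment(annotationId, comment);
    }
}
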
Corina Olariu
fe84f2ef04 Merge branch 'RED-7185-comments' into 'release/1.363.x'
RED-7185 - Fix comment too long

See merge request redactmanager/persistence-service!139
2023-09-25 13:17:43 +02:00
Andrei Isvoran
96e1bc3811 RED-7185 - Fix comment too long 2023-09-22 17:13:44 +03:00
Andrei Isvoran
8ce8e5a6fc Merge branch 'RED-7185' into 'release/1.363.x'
RED-7185 - Error message when adjusting the Justification

See merge request redactmanager/persistence-service!129
2023-09-22 11:41:56 +02:00
Andrei Isvoran
f0caa836d4 RED-7185 - Error message when adjusting the Justification 2023-09-22 11:41:56 +02:00
Andrei Isvoran
82a5023554 Merge branch 'RED-7326-backport' into 'release/1.363.x'
RED-7326 - Backport license storage implementation

See merge request redactmanager/persistence-service!133
2023-09-22 08:33:16 +02:00
Andrei Isvoran
dbb1f21a52 RED-7326 - Backport license storage implementation 2023-09-22 08:33:16 +02:00
Timo Bejan
9786cc305d Quartz test 2023-06-28 12:52:58 +03:00
deiflaender
d5d3770046 hotfix: test banner 2023-06-28 11:42:27 +02:00
Christoph Schabert
2b9e12cba0 Fix pom Version 2023-06-28 09:45:43 +02:00
Dominique Eifländer
3b05f72345 Merge branch 'RED-6912-ignorecase3.6' into 'release/1.363.x'
RED-6912- Entries not sorted correctly in the dossier dictionary - backport to 3.6

See merge request redactmanager/persistence-service!29
2023-06-26 11:18:34 +02:00
Corina Olariu
7b2ccaea84 RED-6912- Entries not sorted correctly in the dossier dictionary - backport to 3.6
- update the sorting to be case-insensitive
- update junit tests
2023-06-26 10:41:18 +03:00
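
The case-insensitive ordering mentioned above is essentially a one-line comparator change; a minimal sketch with an illustrative entry type (the repository's own entity classes are not reproduced here):

import java.util.Comparator;
import java.util.List;

public final class CaseInsensitiveSortSketch {

    /** Illustrative stand-in for a dictionary entry exposing its value. */
    public interface Entry {
        String getValue();
    }

    private CaseInsensitiveSortSketch() {
    }

    public static void sortByValue(List<? extends Entry> entries) {
        // String.CASE_INSENSITIVE_ORDER keeps "apple" next to "Apple" instead of
        // sorting every upper-case value ahead of all lower-case ones.
        entries.sort(Comparator.comparing(Entry::getValue, String.CASE_INSENSITIVE_ORDER));
    }
}
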
Corina Olariu
b3ed67450f Merge branch 'RED-6912-backport3.6' into 'release/1.363.x'
RED-6912: Entries not sorted correctly in the dossier dictionary

See merge request redactmanager/persistence-service!18
2023-06-22 11:44:46 +02:00
Corina Olariu
a0c7d88767 RED-6912 - Entries not sorted correctly in the dossier dictionary -backport to 3.6.2
- sort entries and falsePositives and falseRecommendations in endpoint getDictionaryForType
- add junit tests
2023-06-22 11:06:46 +03:00
Kevin Tumma
0fa11b4301 Adjust to new CI 2023-06-22 09:53:17 +02:00
Dominique Eiflaender
05212581bd Pull request #702: RED-6860: Fixed transaction timeout and changed s3 upload to multipart to be able to handle downloads > 5 gb
Merge in RED/persistence-service from RED-6860-3.6 to release/1.363.x

* commit '14f04fdc34f876f71d2d68fe96b5e327878efcb6':
  RED-6860: Fixed transaction timeout and changed s3 upload to multipart to be able to handle downloads > 5 gb
2023-06-01 09:27:38 +02:00
deiflaender
14f04fdc34 RED-6860: Fixed transaction timeout and changed s3 upload to multipart to be able to handle downloads > 5 gb 2023-06-01 09:22:00 +02:00
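
The multipart-upload side of the commit above is not part of the file changes shown on this page. As a rough illustration of the idea (a single S3 PUT is capped at 5 GB, so larger archives have to be uploaded in parts), here is a sketch assuming the AWS SDK for Java v1 TransferManager; the service's actual client code may look different:

import java.io.File;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;

public final class MultipartUploadSketch {

    private MultipartUploadSketch() {
    }

    public static void upload(String bucket, String key, File archive) throws InterruptedException {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        TransferManager transferManager = TransferManagerBuilder.standard()
                .withS3Client(s3)
                // Files above this threshold are split into parts automatically,
                // which is what makes objects beyond the 5 GB single-PUT limit possible.
                .withMultipartUploadThreshold(100L * 1024 * 1024)
                .build();
        try {
            Upload upload = transferManager.upload(bucket, key, archive);
            upload.waitForCompletion();
        } finally {
            transferManager.shutdownNow(false);
        }
    }
}
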
Christoph Schabert
86b831eea4 pom.xml edited online with Bitbucket 2023-05-25 10:08:11 +02:00
Christoph Schabert
74b6f4fd6f pom.xml edited online with Bitbucket 2023-05-25 10:06:31 +02:00
Christoph Schabert
a0cbc167ab pom.xml edited online with Bitbucket 2023-05-25 10:05:36 +02:00
Christoph Schabert
c329c8935d update platform-docker-dependency with knecon fix 2023-05-25 10:04:34 +02:00
Viktor Seifert
e5315071c7 Pull request #698: RED-6777: Update build config to new domain name
Merge in RED/persistence-service from RED-6777-3.6 to release/1.363.x

* commit '3af8a3a9d20604cd3cb3372bb9eff63415bc7b1a':
  RED-6777: Update build config to new domain name
2023-05-23 14:38:29 +02:00
Viktor Seifert
3af8a3a9d2 RED-6777: Update build config to new domain name 2023-05-23 13:15:32 +02:00
Viktor Seifert
79ba88162b Pull request #697: RED-6777: Reimplemented deletion of dictionary entries as a batch process to avoid a limitation in the Postgres JDBC driver
Merge in RED/persistence-service from RED-6777-3.6 to release/1.363.x

* commit '71a11fc24afe8d80bc1f32a0f04e06cabec8cf39':
  RED-6777: Reimplemented deletion of dictionary entries as a batch process to avoid a limitation in the Postgres JDBC driver
2023-05-23 12:59:38 +02:00
Viktor Seifert
71a11fc24a RED-6777: Reimplemented deletion of dictionary entries as a batch process to avoid a limitation in the Postgres JDBC driver
(change backported from master)
2023-05-23 12:28:09 +02:00
Viktor Seifert
1cb2cc1700 Pull request #676: RED-6467: Backport of change from 4.0.
Merge in RED/persistence-service from RED-6467-3.6 to release/1.363.x

* commit 'cf0c2d6100d28979771a1e26cc628d752096c792':
  RED-6467: Backport of change from 4.0.
2023-04-24 10:28:43 +02:00
Viktor Seifert
cf0c2d6100 RED-6467: Backport of change from 4.0.
* Implemented undeletion of dictionary entries by running a native query in chunks.
This avoids a limitation in the JDBC driver.
* Changed unique name check to not use Exceptions to prevent transaction rollbacks
2023-04-21 16:53:04 +02:00
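
The chunking mentioned above works around the bind-parameter limit the Postgres JDBC driver imposes on a single statement: a very large IN (...) list is split into fixed-size batches that are executed one after another. A minimal sketch of the soft-delete case using Spring's NamedParameterJdbcTemplate; the helper actually used by the repository implementations further down is not part of this diff, so the class and method names here are illustrative:

import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

import org.springframework.jdbc.core.namedparam.MapSqlParameterSource;
import org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate;

public class BatchedSoftDeleteSketch {

    private static final int BATCH_SIZE = 1000;

    private final NamedParameterJdbcTemplate jdbcTemplate;

    public BatchedSoftDeleteSketch(NamedParameterJdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    public void softDeleteInBatches(String typeId, Collection<String> values, long version) {
        List<String> all = new ArrayList<>(values);
        for (int from = 0; from < all.size(); from += BATCH_SIZE) {
            List<String> batch = all.subList(from, Math.min(from + BATCH_SIZE, all.size()));
            // Each statement binds at most BATCH_SIZE + 2 parameters, which stays
            // well below the driver's per-statement limit.
            jdbcTemplate.update(
                    "update dictionary_entry set deleted = true, version = :version "
                            + "where type_id = :typeId and value in (:values)",
                    new MapSqlParameterSource()
                            .addValue("version", version)
                            .addValue("typeId", typeId)
                            .addValue("values", batch));
        }
    }
}
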
Viktor Seifert
9742afe175 Pull request #652: RED-6497
Merge in RED/persistence-service from RED-6497 to release/1.363.x

* commit '3fc9dc5132b5eaeb1d8bd609acb57bacc62237d5':
  RED-6497: Removed alternate file size check because it offers less error reporting capabilities
  RED-6497: Fixed issue where the file-size of download was saved incorrectly.
  RED-6497: Changed code to use a different method of getting a file size to have better error reporting
  RED-6497: Removed explicit variable initialization because it's marked as a checkstyle violation.
  RED-6497: Corrected handling of the stream, temp-file and exceptions.
2023-03-31 09:47:47 +02:00
Viktor Seifert
3fc9dc5132 Merge branch 'release/1.363.x' into RED-6497 2023-03-30 17:45:05 +02:00
Viktor Seifert
6a3db677ef RED-6497: Removed alternate file size check because it offers less error reporting capabilities 2023-03-30 17:37:18 +02:00
Viktor Seifert
6f203f07e1 RED-6497: Fixed issue where the file-size of download was saved incorrectly.
* Changed the update to the DownloadStatusEntity to use the entity object instead of using a query on JPA-repo.  This prevents values from the object being incorrectly inserted otherwise.
* Extended the DownloadPreparationTest to check the resulting download state and file size.
2023-03-30 17:30:16 +02:00
Corina Olariu
4051d6b3e1 Pull request #651: RED-6265 - Bulk dossier stats endpoint does not return dossier stats of dossiers, for which the current dossier does not have access permissions
Merge in RED/persistence-service from bugfix/RED-6265-3.6.0 to release/1.363.x

* commit '08965675999b4309147a216d362c433d4c08381b':
  RED-6265 - Bulk dossier stats endpoint does not return dossier stats of dossiers, for which the current dossier does not have access permissions
2023-03-30 15:47:44 +02:00
Viktor Seifert
284738b59e RED-6497: Changed code to use a different method of getting a file size to have better error reporting 2023-03-30 15:43:07 +02:00
devplant
0896567599 RED-6265 - Bulk dossier stats endpoint does not return dossier stats of dossiers, for which the current dossier does not have access permissions
- remove deprecated
2023-03-30 16:12:54 +03:00
Viktor Seifert
65884ce42b RED-6497: Removed explicit variable initialization because it's marked as a checkstyle violation. 2023-03-30 14:20:55 +02:00
Viktor Seifert
d74b8bf4ba RED-6497: Corrected handling of the stream, temp-file and exceptions.
* Corrected the order of closing the stream and deleting the file.
* Switched the output of exceptions to the WARN level, since they hint at potential problems.
* Corrected the tests so that they clean up resources.
* Changed the tests to use an archiver that re-throws exceptions, so that problems are not swallowed in testing.
2023-03-29 18:36:48 +02:00
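
The ordering fix in the first bullet above matters because a temporary file cannot be removed reliably while a stream on it is still open; depending on the platform the delete simply fails. A minimal, purely illustrative sketch of the close-before-delete pattern (not the service's actual download code):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public final class TempFileHandlingSketch {

    private TempFileHandlingSketch() {
    }

    public static long consumeAndCleanUp(Path tempFile, OutputStream target) throws IOException {
        try (InputStream in = Files.newInputStream(tempFile)) {
            // Work with the content while the stream is open.
            return in.transferTo(target);
        } finally {
            // try-with-resources has already closed the stream by the time this
            // finally block runs, so the temp file can be deleted safely here.
            Files.deleteIfExists(tempFile);
        }
    }
}
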
Dominique Eiflaender
9930430bcd Pull request #641: RED-6503: Do not override ocred pages on ocr successful
Merge in RED/persistence-service from RED-6503-3.6 to release/1.363.x

* commit '12d7eae288b87bd88ecbe66bf08e31a9c6953461':
  RED-6503: Do not override ocred pages on ocr successful
2023-03-28 09:16:58 +02:00
deiflaender
12d7eae288 RED-6503: Do not override ocred pages on ocr successful 2023-03-27 13:49:28 +02:00
Viktor Seifert
cfa9dfcc6f Pull request #624: RED-6310 3.6
Merge in RED/persistence-service from RED-6310-3.6 to release/1.363.x

* commit '18ee468ca15ac07ff8ec6adc2712ccf42129df91':
  RED-6310: Changed element-collection fetch to eager because lazy loading runs into timing based errors
  RED-6310: Updated test for download preparation test to execute all preparation steps.
2023-03-13 16:56:01 +01:00
Viktor Seifert
18ee468ca1 RED-6310: Changed element-collection fetch to eager because lazy loading runs into timing based errors 2023-03-13 16:07:04 +01:00
Viktor Seifert
b9345110aa RED-6310: Updated test for download preparation test to execute all preparation steps.
Previously the last step was not executed.
2023-03-10 17:39:53 +01:00
Viktor Seifert
c2cf764b56 Pull request #618: RED-6310 3.6
Merge in RED/persistence-service from RED-6310-3.6 to release/1.363.x

* commit 'c698bf3a685dc906ce0e811f157122cb62caa93b':
  RED-6310: Backported test to Junit 4
  RED-6310: Moved code to create user-preferences to a separate class so that the calling code can handle a persistence exception
  RED-6310: Corrected services so that they use the user id instead of wrongly using the entity id
  RED-6310: Moved code for multithreaded tests to a helper class
  RED-6310: Removed not needed code from test
  RED-6310: Added setup of tenant id to fix the service test
  RED-6310: Added tests to check if concurrent access to notification-preferences works
2023-03-10 09:58:33 +01:00
Viktor Seifert
c698bf3a68 RED-6310: Backported test to Junit 4 2023-03-09 15:56:32 +01:00
Viktor Seifert
222e5915ed RED-6310: Moved code to create user-preferences to a separate class so that the calling code can handle a persistence exception
(cherry picked from commit 45bd8e600328ce85c8457525b605da2c16dd70c8)
2023-03-09 15:42:44 +01:00
Viktor Seifert
657a616093 RED-6310: Corrected services so that they use the user id instead of wrongly using the entity id
(cherry picked from commit 643ffc8723d923a7ede8ee52e25ea40e25045148)
2023-03-09 15:42:43 +01:00
Viktor Seifert
551840a46f RED-6310: Moved code for multithreaded tests to a helper class
(cherry picked from commit 40d6961742436fd4f7c5e07616850d8b1eb31a43)
2023-03-09 15:42:42 +01:00
Viktor Seifert
a0e0aadb61 RED-6310: Removed not needed code from test
(cherry picked from commit fd7f39bc7e4b4cc67ed55d5e77ef84d93779b342)
2023-03-09 15:42:41 +01:00
Viktor Seifert
d42d56e7f1 RED-6310: Added setup of tenant id to fix the service test
(cherry picked from commit b1d79970c033822ddffb47be199c379cada2ce40)
2023-03-09 15:42:39 +01:00
Viktor Seifert
bbc58379f4 RED-6310: Added tests to check if concurrent access to notification-preferences works
(cherry picked from commit ef36f5c10f21042ce74621dd8e48a754b6a7258f)
2023-03-09 15:42:38 +01:00
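
The multithreaded tests referenced in the RED-6310 commits above are not among the files shown on this page. A minimal sketch of the kind of helper such tests typically rely on, with illustrative names only:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public final class ConcurrentCallHelper {

    private ConcurrentCallHelper() {
    }

    /** Runs the same task from several threads at once and rethrows the first failure. */
    public static void runConcurrently(int threads, Runnable task) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(threads);
        try {
            List<Callable<Void>> calls = new ArrayList<>();
            for (int i = 0; i < threads; i++) {
                calls.add(() -> {
                    task.run();
                    return null;
                });
            }
            for (Future<Void> result : executor.invokeAll(calls)) {
                result.get(); // propagates any exception thrown by a worker thread
            }
        } finally {
            executor.shutdownNow();
        }
    }
}

A test would then call, for example, runConcurrently(8, () -> service.getOrCreateNotificationPreferences("user-1")) and assert that no call failed.
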
72 changed files with 1320 additions and 792 deletions

.gitlab-ci.yml Normal file
View File

@ -0,0 +1,6 @@
variables:
  SONAR_PROJECT_KEY: 'RED_persistence-service'
include:
  - project: 'gitlab/gitlab'
    ref: 'main'
    file: 'ci-templates/maven_java.yml'

View File

@ -1,37 +0,0 @@
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>com.atlassian.bamboo</groupId>
<artifactId>bamboo-specs-parent</artifactId>
<version>8.1.3</version>
<relativePath/>
</parent>
<artifactId>bamboo-specs</artifactId>
<version>1.0.0-SNAPSHOT</version>
<packaging>jar</packaging>
<dependencies>
<dependency>
<groupId>com.atlassian.bamboo</groupId>
<artifactId>bamboo-specs-api</artifactId>
</dependency>
<dependency>
<groupId>com.atlassian.bamboo</groupId>
<artifactId>bamboo-specs</artifactId>
</dependency>
<!-- Test dependencies -->
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
<!-- run 'mvn test' to perform offline validation of the plan -->
<!-- run 'mvn -Ppublish-specs' to upload the plan to your Bamboo server -->
</project>

View File

@ -1,126 +0,0 @@
package buildjob;
import static com.atlassian.bamboo.specs.builders.task.TestParserTask.createJUnitParserTask;
import java.time.LocalTime;
import com.atlassian.bamboo.specs.api.BambooSpec;
import com.atlassian.bamboo.specs.api.builders.BambooKey;
import com.atlassian.bamboo.specs.api.builders.Variable;
import com.atlassian.bamboo.specs.api.builders.docker.DockerConfiguration;
import com.atlassian.bamboo.specs.api.builders.permission.PermissionType;
import com.atlassian.bamboo.specs.api.builders.permission.Permissions;
import com.atlassian.bamboo.specs.api.builders.permission.PlanPermissions;
import com.atlassian.bamboo.specs.api.builders.plan.Job;
import com.atlassian.bamboo.specs.api.builders.plan.Plan;
import com.atlassian.bamboo.specs.api.builders.plan.PlanIdentifier;
import com.atlassian.bamboo.specs.api.builders.plan.Stage;
import com.atlassian.bamboo.specs.api.builders.plan.branches.BranchCleanup;
import com.atlassian.bamboo.specs.api.builders.plan.branches.PlanBranchManagement;
import com.atlassian.bamboo.specs.api.builders.project.Project;
import com.atlassian.bamboo.specs.builders.task.CheckoutItem;
import com.atlassian.bamboo.specs.builders.task.InjectVariablesTask;
import com.atlassian.bamboo.specs.builders.task.ScriptTask;
import com.atlassian.bamboo.specs.builders.task.VcsCheckoutTask;
import com.atlassian.bamboo.specs.builders.task.VcsTagTask;
import com.atlassian.bamboo.specs.builders.trigger.BitbucketServerTrigger;
import com.atlassian.bamboo.specs.builders.trigger.ScheduledTrigger;
import com.atlassian.bamboo.specs.model.task.InjectVariablesScope;
import com.atlassian.bamboo.specs.model.task.ScriptTaskProperties.Location;
import com.atlassian.bamboo.specs.util.BambooServer;
/**
* Plan configuration for Bamboo.
* Learn more on: <a href="https://confluence.atlassian.com/display/BAMBOO/Bamboo+Specs">https://confluence.atlassian.com/display/BAMBOO/Bamboo+Specs</a>
*/
@BambooSpec
public class PlanSpec {
private static final String SERVICE_NAME = "persistence-service";
private static final String SERVICE_KEY = SERVICE_NAME.toUpperCase().replaceAll("-", "");
/**
* Run main to publish plan on Bamboo
*/
public static void main(final String[] args) throws Exception {
//By default, credentials are read from the '.credentials' file.
BambooServer bambooServer = new BambooServer("http://localhost:8085");
Plan buildPlan = new PlanSpec().createPlanBuild();
bambooServer.publish(buildPlan);
PlanPermissions buildPlanPermission = new PlanSpec().createPlanPermission(buildPlan.getIdentifier());
bambooServer.publish(buildPlanPermission);
Plan secPlan = new PlanSpec().createSecBuild();
bambooServer.publish(secPlan);
PlanPermissions secPlanPermission = new PlanSpec().createPlanPermission(secPlan.getIdentifier());
bambooServer.publish(secPlanPermission);
}
private PlanPermissions createPlanPermission(PlanIdentifier planIdentifier) {
Permissions permission = new Permissions().userPermissions("atlbamboo",
PermissionType.EDIT,
PermissionType.VIEW,
PermissionType.ADMIN,
PermissionType.CLONE,
PermissionType.BUILD)
.groupPermissions("development", PermissionType.EDIT, PermissionType.VIEW, PermissionType.CLONE, PermissionType.BUILD)
.groupPermissions("devplant", PermissionType.EDIT, PermissionType.VIEW, PermissionType.CLONE, PermissionType.BUILD)
.loggedInUserPermissions(PermissionType.VIEW)
.anonymousUserPermissionView();
return new PlanPermissions(planIdentifier.getProjectKey(), planIdentifier.getPlanKey()).permissions(permission);
}
private Project project() {
return new Project().name("RED").key(new BambooKey("RED"));
}
public Plan createPlanBuild() {
return new Plan(project(), SERVICE_NAME, new BambooKey(SERVICE_KEY)).description("Build Plan for Persitence Service")
.variables(new Variable("maven_add_param", ""))
.stages(new Stage("Default Stage").jobs(new Job("Default Job", new BambooKey("JOB1")).tasks(new ScriptTask().description("Clean")
.inlineBody("#!/bin/bash\n" + "set -e\n" + "rm -rf ./*"),
new VcsCheckoutTask().description("Checkout Default Repository").cleanCheckout(true).checkoutItems(new CheckoutItem().defaultRepository()),
new ScriptTask().description("Build").location(Location.FILE).fileFromPath("bamboo-specs/src/main/resources/scripts/build-java.sh").argument(SERVICE_NAME),
createJUnitParserTask().description("Resultparser")
.resultDirectories("**/test-reports/*.xml, **/target/surefire-reports/*.xml, **/target/failsafe-reports/*.xml")
.enabled(true),
new InjectVariablesTask().description("Inject git Tag").path("git.tag").namespace("g").scope(InjectVariablesScope.LOCAL),
new VcsTagTask().description("${bamboo.g.gitTag}").tagName("${bamboo.g.gitTag}").defaultRepository())
.dockerConfiguration(new DockerConfiguration().image("nexus.iqser.com:5001/infra/maven:3.8.4-openjdk-17-slim")
.dockerRunArguments("--net=host")
.volume("/etc/maven/settings.xml", "/usr/share/maven/ref/settings.xml")
.volume("/var/run/docker.sock", "/var/run/docker.sock"))))
.linkedRepositories("RED / " + SERVICE_NAME)
.triggers(new BitbucketServerTrigger())
.planBranchManagement(new PlanBranchManagement().createForVcsBranch()
.delete(new BranchCleanup().whenInactiveInRepositoryAfterDays(14))
.notificationForCommitters());
}
public Plan createSecBuild() {
return new Plan(project(), SERVICE_NAME + "-Sec", new BambooKey(SERVICE_KEY + "SEC")).description("Security Analysis Plan")
.stages(new Stage("Default Stage").jobs(new Job("Default Job", new BambooKey("JOB1")).tasks(new ScriptTask().description("Clean")
.inlineBody("#!/bin/bash\n" + "set -e\n" + "rm -rf ./*"),
new VcsCheckoutTask().description("Checkout Default Repository").cleanCheckout(true).checkoutItems(new CheckoutItem().defaultRepository()),
new ScriptTask().description("Sonar").location(Location.FILE).fileFromPath("bamboo-specs/src/main/resources/scripts/sonar-java.sh").argument(SERVICE_NAME))
.dockerConfiguration(new DockerConfiguration().image("nexus.iqser.com:5001/infra/maven:3.8.4-openjdk-17-slim")
.dockerRunArguments("--net=host")
.volume("/etc/maven/settings.xml", "/usr/share/maven/conf/settings.xml")
.volume("/var/run/docker.sock", "/var/run/docker.sock"))))
.linkedRepositories("RED / " + SERVICE_NAME)
.triggers(new ScheduledTrigger().scheduleOnceDaily(LocalTime.of(23, 00)))
.planBranchManagement(new PlanBranchManagement().createForVcsBranchMatching("release.*").notificationForCommitters());
}
}

View File

@ -1,61 +0,0 @@
#!/bin/bash
set -e
SERVICE_NAME=$1
set SERVER_PORT=$(shuf -i 20000-65000 -n 1)
if [[ "$bamboo_planRepository_branchName" == "master" ]]
then
branchVersion=$(cat pom.xml | grep -Eo " <version>.*-SNAPSHOT</version>" | sed -s 's|<version>\(.*\)\..*\(-*.*\)</version>|\1|' | tr -d ' ')
latestVersion=$( semver $(git tag -l "${branchVersion}.*" ) | tail -n1 )
newVersion="$(semver $latestVersion -p -i minor)"
elif [[ "$bamboo_planRepository_branchName" == release* ]]
then
branchVersion=$(echo $bamboo_planRepository_branchName | sed -s 's|release\/\([0-9]\+\.[0-9]\+\)\.x|\1|')
latestVersion=$( semver $(git tag -l "${branchVersion}.*" ) | tail -n1 )
newVersion="$(semver $latestVersion -p -i patch)"
elif [[ "${bamboo_version_tag}" != "dev" ]]
then
newVersion="${bamboo_version_tag}"
else
mvn -f ${bamboo_build_working_directory}/$SERVICE_NAME-v1/pom.xml \
--no-transfer-progress \
${bamboo_maven_add_param} \
clean install \
-Djava.security.egd=file:/dev/./urandom
echo "gitTag=${bamboo_planRepository_1_branch}_${bamboo_buildNumber}" > git.tag
exit 0
fi
echo "gitTag=${newVersion}" > git.tag
mvn --no-transfer-progress \
-f ${bamboo_build_working_directory}/$SERVICE_NAME-v1/pom.xml \
versions:set \
-DnewVersion=${newVersion}
mvn --no-transfer-progress \
-f ${bamboo_build_working_directory}/$SERVICE_NAME-image-v1/pom.xml \
versions:set \
-DnewVersion=${newVersion}
mvn -f ${bamboo_build_working_directory}/$SERVICE_NAME-v1/pom.xml \
--no-transfer-progress \
clean deploy \
${bamboo_maven_add_param} \
-e \
-DdeployAtEnd=true \
-Dmaven.wagon.http.ssl.insecure=true \
-Dmaven.wagon.http.ssl.allowall=true \
-Dmaven.wagon.http.ssl.ignore.validity.dates=true \
-DaltDeploymentRepository=iqser_release::default::https://nexus.iqser.com/repository/red-platform-releases
mvn --no-transfer-progress \
-f ${bamboo_build_working_directory}/$SERVICE_NAME-image-v1/pom.xml \
package
mvn --no-transfer-progress \
-f ${bamboo_build_working_directory}/$SERVICE_NAME-image-v1/pom.xml \
docker:push

View File

@ -1,45 +0,0 @@
#!/bin/bash
set -e
SERVICE_NAME=$1
set SERVER_PORT=$(shuf -i 20000-65000 -n 1)
echo "build jar binaries"
mvn -f ${bamboo_build_working_directory}/$SERVICE_NAME-v1/pom.xml \
--no-transfer-progress \
clean install \
-Djava.security.egd=file:/dev/./urandom
echo "dependency-check:aggregate"
mvn --no-transfer-progress \
-f ${bamboo_build_working_directory}/$SERVICE_NAME-v1/pom.xml \
org.owasp:dependency-check-maven:aggregate
if [[ -z "${bamboo_repository_pr_key}" ]]
then
echo "Sonar Scan for branch: ${bamboo_planRepository_1_branch}"
mvn --no-transfer-progress \
-f ${bamboo_build_working_directory}/$SERVICE_NAME-v1/pom.xml \
sonar:sonar \
-Dsonar.projectKey=RED_$SERVICE_NAME \
-Dsonar.host.url=https://sonarqube.iqser.com \
-Dsonar.login=${bamboo_sonarqube_api_token_secret} \
-Dsonar.branch.name=${bamboo_planRepository_1_branch} \
-Dsonar.dependencyCheck.jsonReportPath=target/dependency-check-report.json \
-Dsonar.dependencyCheck.xmlReportPath=target/dependency-check-report.xml \
-Dsonar.dependencyCheck.htmlReportPath=target/dependency-check-report.html
else
echo "Sonar Scan for PR with key1: ${bamboo_repository_pr_key}"
mvn --no-transfer-progress \
-f ${bamboo_build_working_directory}/$SERVICE_NAME-v1/pom.xml \
sonar:sonar \
-Dsonar.projectKey=RED_$SERVICE_NAME \
-Dsonar.host.url=https://sonarqube.iqser.com \
-Dsonar.login=${bamboo_sonarqube_api_token_secret} \
-Dsonar.pullrequest.key=${bamboo_repository_pr_key} \
-Dsonar.pullrequest.branch=${bamboo_repository_pr_sourceBranch} \
-Dsonar.pullrequest.base=${bamboo_repository_pr_targetBranch} \
-Dsonar.dependencyCheck.jsonReportPath=target/dependency-check-report.json \
-Dsonar.dependencyCheck.xmlReportPath=target/dependency-check-report.xml \
-Dsonar.dependencyCheck.htmlReportPath=target/dependency-check-report.html
fi

View File

@ -1,21 +0,0 @@
package buildjob;
import org.junit.Test;
import com.atlassian.bamboo.specs.api.builders.plan.Plan;
import com.atlassian.bamboo.specs.api.exceptions.PropertiesValidationException;
import com.atlassian.bamboo.specs.api.util.EntityPropertiesBuilders;
public class PlanSpecTest {
@Test
public void checkYourPlanOffline() throws PropertiesValidationException {
Plan buildPlan = new PlanSpec().createPlanBuild();
EntityPropertiesBuilders.build(buildPlan);
Plan secPlan = new PlanSpec().createSecBuild();
EntityPropertiesBuilders.build(secPlan);
}
}

View File

@ -3,9 +3,9 @@
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<parent>
<groupId>com.iqser.red</groupId>
<groupId>com.knecon.fforesight</groupId>
<artifactId>platform-docker-dependency</artifactId>
<version>1.2.0</version>
<version>0.1.0</version>
<relativePath/>
</parent>
<modelVersion>4.0.0</modelVersion>

View File

@ -20,5 +20,6 @@ public class AddFileRequest {
@NonNull
private String dossierId;
private String uploader;
private long fileSize;
}

View File

@ -15,28 +15,21 @@ import lombok.NoArgsConstructor;
@NoArgsConstructor
public class LicenseReport {
private long totalFilesUploadedBytes;
private long activeFilesUploadedBytes;
private long trashFilesUploadedBytes;
private long archivedFilesUploadedBytes;
private int numberOfAnalyzedFiles;
private int numberOfOcrFiles;
private int numberOfDossiers;
private int numberOfAnalyzedPages;
private long analysedFilesBytes;
private int numberOfOcrPages;
private int numberOfAnalyses; // includes reanalysis counts
private Instant startDate;
private Instant endDate;
private int offset;
private int limit;
private List<ReportData> data = new ArrayList<>();
// To be used for consecutive/paged calls
private String requestId;
@Builder.Default
private List<MonthlyReportData> monthlyData = new ArrayList<>();
}

View File

@ -27,19 +27,9 @@ public class LicenseReportRequest {
private List<String> dossierIds = new ArrayList<>();
public Instant getStartDate() {
if (startDate == null) {
startDate = Year.of(getEndDate().atOffset(ZoneOffset.UTC).getYear()).atMonth(1).atDay(1).atStartOfDay().toInstant(ZoneOffset.UTC);
}
return startDate;
}
public Instant getEndDate() {
if (endDate == null) {
if (endDate == null || endDate.isAfter(Instant.now())) {
endDate = Instant.now();
}
return endDate;

View File

@ -0,0 +1,26 @@
package com.iqser.red.service.persistence.service.v1.api.model.license;
import java.time.Instant;
import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Data;
import lombok.NoArgsConstructor;
@Data
@Builder
@AllArgsConstructor
@NoArgsConstructor
public class MonthlyReportData {
private Instant startDate;
private Instant endDate;
private long totalFilesUploadedBytes;
private long activeFilesUploadedBytes;
private long trashFilesUploadedBytes;
private long archivedFilesUploadedBytes;
private int numberOfAnalyzedPages;
private long analysedFilesBytes;
private int numberOfOcrPages;
}

View File

@ -19,12 +19,10 @@ public interface DossierStatsResource {
String DOSSIER_ID_PATH_PARAM = "/{" + DOSSIER_ID_PARAM + "}";
@Deprecated
@GetMapping(value = REST_PATH + DOSSIER_ID_PATH_PARAM, produces = MediaType.APPLICATION_JSON_VALUE)
DossierStats getDossierStats(@PathVariable(DOSSIER_ID_PARAM) String dossierId);
@Deprecated
@PostMapping(value = REST_PATH, produces = MediaType.APPLICATION_JSON_VALUE)
List<DossierStats> getDossierStats(@RequestBody Set<String> dossierIds);

View File

@ -16,8 +16,6 @@ public interface LicenseReportResource {
@ResponseBody
@ResponseStatus(value = HttpStatus.OK)
@PostMapping(value = "/report/license", consumes = MediaType.APPLICATION_JSON_VALUE, produces = MediaType.APPLICATION_JSON_VALUE)
LicenseReport getLicenseReport(@RequestBody LicenseReportRequest licenseReportRequest,
@RequestParam(value = "offset", defaultValue = "0") int offset,
@RequestParam(value = "limit", defaultValue = "20") int limit);
LicenseReport getLicenseReport(@RequestBody LicenseReportRequest licenseReportRequest);
}

View File

@ -61,10 +61,10 @@ public class DownloadStatusEntity {
@Column
long fileSize;
@ManyToOne(fetch = FetchType.LAZY)
@ManyToOne(fetch = FetchType.EAGER)
DossierEntity dossier;
@ManyToMany
@ManyToMany(fetch = FetchType.EAGER)
@Fetch(FetchMode.SUBSELECT)
List<FileEntity> files = new ArrayList<>();
@ -73,7 +73,7 @@ public class DownloadStatusEntity {
@Convert(converter = JSONDownloadFileTypeConverter.class)
Set<DownloadFileType> downloadFileTypes = new HashSet<>();
@ManyToMany(fetch = FetchType.LAZY)
@ManyToMany(fetch = FetchType.EAGER)
@Fetch(FetchMode.SUBSELECT)
List<ReportTemplateEntity> reports = new ArrayList<>();

View File

@ -6,6 +6,7 @@ import java.util.List;
import javax.persistence.Column;
import javax.persistence.ElementCollection;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.Table;
@ -39,11 +40,11 @@ public class NotificationPreferencesEntity {
@Column
private EmailNotificationType emailNotificationType;
@ElementCollection
@ElementCollection(fetch = FetchType.EAGER)
@Fetch(FetchMode.SUBSELECT)
private List<String> emailNotifications = new ArrayList<>();
@ElementCollection
@ElementCollection(fetch = FetchType.EAGER)
@Fetch(FetchMode.SUBSELECT)
private List<String> inAppNotifications = new ArrayList<>();

View File

@ -9,6 +9,7 @@ import java.util.ArrayList;
import java.util.List;
import java.util.UUID;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.BeanUtils;
import org.springframework.stereotype.Service;
@ -63,6 +64,10 @@ public class DossierTemplateCloneService {
public DossierTemplateEntity cloneDossierTemplate(String dossierTemplateId, CloneDossierTemplateRequest cloneDossierTemplateRequest) {
if (StringUtils.isEmpty(cloneDossierTemplateRequest.getName())) {
throw new ConflictException("DossierTemplate name must be set");
}
dossierTemplatePersistenceService.validateDossierTemplate(cloneDossierTemplateRequest.getName(), cloneDossierTemplateRequest.getDescription());
dossierTemplatePersistenceService.validateDossierTemplateNameIsUnique(cloneDossierTemplateRequest.getName());
DossierTemplateEntity clonedDossierTemplate = new DossierTemplateEntity();

View File

@ -17,8 +17,8 @@ import com.iqser.red.service.persistence.management.v1.processor.exception.BadRe
import com.iqser.red.service.persistence.management.v1.processor.exception.NotFoundException;
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.DossierRepository;
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.DossierTemplateRepository;
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.EntryRepository;
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.TypeRepository;
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.dictionaryentry.EntryRepository;
import com.iqser.red.service.persistence.service.v1.api.model.dossiertemplate.type.DictionarySummaryResponse;
import lombok.RequiredArgsConstructor;

View File

@ -32,11 +32,14 @@ public class DossierTemplatePersistenceService {
private final LegalBasisMappingPersistenceService legalBasisMappingPersistenceService;
private final RulesPersistenceService rulesPersistenceService;
private final int MAX_NAME_LENGTH = 255;
private final int MAX_DESCRIPTION_LENGTH = 4000;
@Transactional
public DossierTemplateEntity createOrUpdateDossierTemplate(CreateOrUpdateDossierTemplateRequest createOrUpdateDossierRequest) {
if (createOrUpdateDossierRequest.getDossierTemplateId() != null) {
validateDossierTemplate(createOrUpdateDossierRequest.getName(), createOrUpdateDossierRequest.getDescription());
Optional<DossierTemplateEntity> dossierTemplate = dossierTemplateRepository.findById(createOrUpdateDossierRequest.getDossierTemplateId());
if (dossierTemplate.isPresent()) {
@ -58,6 +61,7 @@ public class DossierTemplatePersistenceService {
throw new ConflictException("DossierTemplate name must be set");
}
validateDossierTemplateNameIsUnique(createOrUpdateDossierRequest.getName());
validateDossierTemplate(createOrUpdateDossierRequest.getName(), createOrUpdateDossierRequest.getDescription());
DossierTemplateEntity dossierTemplate = new DossierTemplateEntity();
dossierTemplate.setId(UUID.randomUUID().toString());
// order is important
@ -73,6 +77,16 @@ public class DossierTemplatePersistenceService {
}
public void validateDossierTemplate(String name, String description) {
if (!StringUtils.isEmpty(name) && name.length() > MAX_NAME_LENGTH) {
throw new BadRequestException(String.format("The name is too long (%s), max length %s", name.length(), MAX_NAME_LENGTH));
}
if (!StringUtils.isEmpty(description) && description.length() > MAX_DESCRIPTION_LENGTH) {
throw new BadRequestException(String.format("The description is too long (%s), max length %s", description.length(), MAX_DESCRIPTION_LENGTH));
}
}
public DossierTemplateStatus computeDossierTemplateStatus(DossierTemplateEntity dossierTemplate) {
@ -96,13 +110,19 @@ public class DossierTemplatePersistenceService {
}
@Transactional
public void validateDossierTemplateNameIsUnique(String templateName) {
getAllDossierTemplates().forEach(existing -> {
if (existing.getName().equals(templateName)) {
throw new ConflictException("DossierTemplate name must be unique");
}
});
if (isDossierTemplateNameNotUnique(templateName)) {
throw new ConflictException("DossierTemplate name must be unique");
}
}
@Transactional
public boolean isDossierTemplateNameNotUnique(String templateName) {
return dossierTemplateRepository.existsByName(templateName);
}

View File

@ -73,9 +73,11 @@ public class DownloadStatusPersistenceService {
@Transactional
public void updateStatus(String storageId, DownloadStatusValue status, long fileSize) {
public void updateStatus(DownloadStatusEntity entity, DownloadStatusValue statusValue, long fileSize) {
downloadStatusRepository.updateStatus(storageId, status, fileSize);
entity.setStatus(statusValue);
entity.setFileSize(fileSize);
downloadStatusRepository.save(entity);
}

View File

@ -12,10 +12,10 @@ import com.iqser.red.service.persistence.management.v1.processor.entity.configur
import com.iqser.red.service.persistence.management.v1.processor.entity.configuration.DictionaryEntryEntity;
import com.iqser.red.service.persistence.management.v1.processor.entity.configuration.DictionaryFalsePositiveEntryEntity;
import com.iqser.red.service.persistence.management.v1.processor.entity.configuration.DictionaryFalseRecommendationEntryEntity;
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.EntryRepository;
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.FalsePositiveEntryRepository;
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.FalseRecommendationEntryRepository;
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.TypeRepository;
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.dictionaryentry.EntryRepository;
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.dictionaryentry.FalsePositiveEntryRepository;
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.dictionaryentry.FalseRecommendationEntryRepository;
import com.iqser.red.service.persistence.management.v1.processor.utils.jdbc.JDBCWriteUtils;
import com.iqser.red.service.persistence.service.v1.api.model.dossiertemplate.type.DictionaryEntryType;
@ -35,18 +35,12 @@ public class EntryPersistenceService {
@Transactional
public void deleteEntries(String typeId, List<String> values, long version, DictionaryEntryType dictionaryEntryType) {
public void deleteEntries(String typeId, Set<String> values, long version, DictionaryEntryType dictionaryEntryType) {
switch (dictionaryEntryType) {
case ENTRY:
entryRepository.deleteAllByTypeIdAndVersionAndValueIn(typeId, version, values);
break;
case FALSE_POSITIVE:
falsePositiveEntryRepository.deleteAllByTypeIdAndVersionAndValueIn(typeId, version, values);
break;
case FALSE_RECOMMENDATION:
falseRecommendationEntryRepository.deleteAllByTypeIdAndVersionAndValueIn(typeId, version, values);
break;
case ENTRY -> entryRepository.deleteAllByTypeIdAndVersionAndValueIn(typeId, values, version);
case FALSE_POSITIVE -> falsePositiveEntryRepository.deleteAllByTypeIdAndVersionAndValueIn(typeId, values, version);
case FALSE_RECOMMENDATION -> falseRecommendationEntryRepository.deleteAllByTypeIdAndVersionAndValueIn(typeId, values, version);
}
}
@ -55,45 +49,29 @@ public class EntryPersistenceService {
public void setVersion(String typeId, long version, DictionaryEntryType dictionaryEntryType) {
switch (dictionaryEntryType) {
case ENTRY:
entryRepository.updateVersionWhereTypeId(version, typeId);
break;
case FALSE_POSITIVE:
falsePositiveEntryRepository.updateVersionWhereTypeId(version, typeId);
break;
case FALSE_RECOMMENDATION:
falseRecommendationEntryRepository.updateVersionWhereTypeId(version, typeId);
break;
case ENTRY -> entryRepository.updateVersionWhereTypeId(version, typeId);
case FALSE_POSITIVE -> falsePositiveEntryRepository.updateVersionWhereTypeId(version, typeId);
case FALSE_RECOMMENDATION -> falseRecommendationEntryRepository.updateVersionWhereTypeId(version, typeId);
}
}
public List<? extends BaseDictionaryEntry> getEntries(String typeId, DictionaryEntryType dictionaryEntryType, Long fromVersion) {
switch (dictionaryEntryType) {
case ENTRY:
return entryRepository.findByTypeIdAndVersionGreaterThan(typeId, fromVersion != null ? fromVersion : -1);
case FALSE_POSITIVE:
return falsePositiveEntryRepository.findByTypeIdAndVersionGreaterThan(typeId, fromVersion != null ? fromVersion : -1);
case FALSE_RECOMMENDATION:
return falseRecommendationEntryRepository.findByTypeIdAndVersionGreaterThan(typeId, fromVersion != null ? fromVersion : -1);
}
return null;
return switch (dictionaryEntryType) {
case ENTRY -> entryRepository.findByTypeIdAndVersionGreaterThan(typeId, fromVersion != null ? fromVersion : -1);
case FALSE_POSITIVE -> falsePositiveEntryRepository.findByTypeIdAndVersionGreaterThan(typeId, fromVersion != null ? fromVersion : -1);
case FALSE_RECOMMENDATION -> falseRecommendationEntryRepository.findByTypeIdAndVersionGreaterThan(typeId, fromVersion != null ? fromVersion : -1);
};
}
public void deleteAllEntriesForTypeId(String typeId, long version, DictionaryEntryType dictionaryEntryType) {
switch (dictionaryEntryType) {
case ENTRY:
entryRepository.deleteAllEntriesForTypeId(typeId, version);
break;
case FALSE_POSITIVE:
falsePositiveEntryRepository.deleteAllEntriesForTypeId(typeId, version);
break;
case FALSE_RECOMMENDATION:
falseRecommendationEntryRepository.deleteAllEntriesForTypeId(typeId, version);
break;
case ENTRY -> entryRepository.deleteAllEntriesForTypeId(typeId, version);
case FALSE_POSITIVE -> falsePositiveEntryRepository.deleteAllEntriesForTypeId(typeId, version);
case FALSE_RECOMMENDATION -> falseRecommendationEntryRepository.deleteAllEntriesForTypeId(typeId, version);
}
}

View File

@ -39,7 +39,7 @@ public class FileStatusPersistenceService {
private final DossierPersistenceService dossierService;
public void createStatus(String dossierId, String fileId, String filename, String uploader) {
public void createStatus(String dossierId, String fileId, String filename, String uploader, long fileSize) {
OffsetDateTime now = OffsetDateTime.now().truncatedTo(ChronoUnit.MILLIS);
FileEntity file = new FileEntity();
@ -55,6 +55,7 @@ public class FileStatusPersistenceService {
file.setLastUpdated(now);
file.setFileManipulationDate(now);
file.setProcessingErrorCounter(0);
file.setFileSize(fileSize);
fileRepository.save(file);
}
@ -65,11 +66,12 @@ public class FileStatusPersistenceService {
if (isFileDeleted(fileId)) {
return;
}
log.info("File " + fileId + " has been optimized with file size " + fileSize);
fileRepository.updateProcessingStatus(fileId,
ProcessingStatus.PRE_PROCESSED,
OffsetDateTime.now().truncatedTo(ChronoUnit.MILLIS),
hasHighlights,
fileSize,
calculateProcessingErrorCounter(fileId, ProcessingStatus.PRE_PROCESSED));
}
@ -290,9 +292,9 @@ public class FileStatusPersistenceService {
}
public List<FileEntity> getStatusesForDossiersAndTimePeriod(Set<String> dossierIds, OffsetDateTime start, OffsetDateTime end) {
public List<FileEntity> getStatusesAddedBefore(OffsetDateTime end) {
return fileRepository.findByDossierIdInAndAddedBetween(dossierIds, start, end);
return fileRepository.findByAddedBefore(end);
}

View File

@ -12,6 +12,7 @@ import org.springframework.stereotype.Service;
import com.iqser.red.service.persistence.management.v1.processor.entity.configuration.LegalBasisEntity;
import com.iqser.red.service.persistence.management.v1.processor.entity.configuration.LegalBasisMappingEntity;
import com.iqser.red.service.persistence.management.v1.processor.exception.BadRequestException;
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.LegalBasisMappingRepository;
import com.iqser.red.service.persistence.service.v1.api.model.dossiertemplate.legalbasis.LegalBasis;
@ -23,6 +24,8 @@ public class LegalBasisMappingPersistenceService {
private final LegalBasisMappingRepository legalBasisMappingRepository;
private final int MAX_NAME_LENGTH = 255;
private final int MAX_LEGAL_BASIS_LENGTH = 4000;
@Transactional
public void deleteLegalBasis(String dossierTemplateId, List<String> legalBasisNames) {
@ -36,10 +39,10 @@ public class LegalBasisMappingPersistenceService {
}
@Transactional
public void addOrUpdateLegalBasis(String dossierTemplateId, LegalBasis legalBasis) {
validateLegalBasis(legalBasis);
var mapping = getLegalBasisMappingOrCreate(dossierTemplateId);
mapping.getLegalBasis().stream().filter(l -> l.getName().equals(legalBasis.getName())).findAny().ifPresentOrElse(existingBasis -> {
@ -54,6 +57,20 @@ public class LegalBasisMappingPersistenceService {
}
private void validateLegalBasis(LegalBasis legalBasis) {
if (legalBasis.getName().length() > MAX_NAME_LENGTH) {
throw new BadRequestException(String.format("The name is too long (%s), max length %s", legalBasis.getName().length(), MAX_NAME_LENGTH));
}
if (legalBasis.getDescription().length() > MAX_LEGAL_BASIS_LENGTH) {
throw new BadRequestException(String.format("The description is too long (%s), max length %s", legalBasis.getDescription().length(), MAX_LEGAL_BASIS_LENGTH));
}
if (legalBasis.getReason().length() > MAX_LEGAL_BASIS_LENGTH) {
throw new BadRequestException(String.format("The legal basis is too long (%s), max length %s", legalBasis.getReason().length(), MAX_LEGAL_BASIS_LENGTH));
}
}
@Transactional
public void setLegalBasisMapping(String dossierTemplateId, List<LegalBasis> legalBasisMapping) {

View File

@ -37,6 +37,7 @@ public class NotificationPersistenceService {
@SneakyThrows
@Transactional
public void insertNotification(AddNotificationRequest addNotificationRequest) {
var notification = new NotificationEntity();

View File

@ -10,6 +10,8 @@ import java.util.stream.Collectors;
import javax.transaction.Transactional;
import org.springframework.beans.BeanUtils;
import org.springframework.dao.DataIntegrityViolationException;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;
import com.iqser.red.service.persistence.management.v1.processor.entity.notification.NotificationPreferencesEntity;
@ -18,15 +20,20 @@ import com.iqser.red.service.persistence.management.v1.processor.service.persist
import com.iqser.red.service.persistence.service.v1.api.model.notification.NotificationPreferences;
import com.iqser.red.service.persistence.service.v1.api.model.notification.NotificationType;
import lombok.AccessLevel;
import lombok.RequiredArgsConstructor;
import lombok.experimental.FieldDefaults;
@Service
@RequiredArgsConstructor
@FieldDefaults(makeFinal = true, level = AccessLevel.PRIVATE)
public class NotificationPreferencesPersistenceService {
private final NotificationPreferencesRepository notificationPreferencesRepository;
NotificationPreferencesRepository notificationPreferencesRepository;
private final NotificationRepository notificationRepository;
NonThreadSafeNotificationPreferencesRepositoryWrapper notificationPreferencesRepositoryWrapper;
NotificationRepository notificationRepository;
@Transactional
@ -60,31 +67,28 @@ public class NotificationPreferencesPersistenceService {
@Transactional
public void deleteNotificationPreferences(String userId) {
notificationPreferencesRepository.deleteById(userId);
notificationPreferencesRepository.deleteByUserId(userId);
}
@Transactional
// This method intentionally does not have a @Transactional annotation, since it needs to handle an underlying transaction exception.
public NotificationPreferencesEntity getOrCreateNotificationPreferences(String userId) {
return notificationPreferencesRepository.findById(userId).orElseGet(() -> {
var notificationPreference = new NotificationPreferencesEntity();
notificationPreference.setUserId(userId);
notificationPreference.setEmailNotificationsEnabled(false);
notificationPreference.setInAppNotificationsEnabled(true);
notificationPreference.setInAppNotifications(Arrays.stream(NotificationType.values()).map(Enum::name).collect(Collectors.toList()));
return notificationPreferencesRepository.save(notificationPreference);
});
try {
// The method called here will fail if it is called concurrently (more than 1 thread), since it will always try to create
// the desired entity. But the exception only means, that the entity has been created by another thread.
// In that case we can just fetch the data from the db.
return notificationPreferencesRepositoryWrapper.getOrCreateNotificationPreferences(userId);
} catch (DataIntegrityViolationException ex) {
return notificationPreferencesRepository.getByUserId(userId);
}
}
@Transactional
public void initializePreferencesIfNotExists(String userId) {
if (!notificationPreferencesRepository.existsByUserId(userId)) {
getOrCreateNotificationPreferences(userId);
}
getOrCreateNotificationPreferences(userId);
}
@ -93,4 +97,29 @@ public class NotificationPreferencesPersistenceService {
return notificationPreferencesRepository.findAll();
}
@Component
@RequiredArgsConstructor
@FieldDefaults(makeFinal = true, level = AccessLevel.PRIVATE)
private static class NonThreadSafeNotificationPreferencesRepositoryWrapper {
NotificationPreferencesRepository notificationPreferencesRepository;
@Transactional(Transactional.TxType.REQUIRES_NEW)
public NotificationPreferencesEntity getOrCreateNotificationPreferences(String userId) {
return notificationPreferencesRepository.findByUserId(userId).orElseGet(() -> {
var notificationPreference = new NotificationPreferencesEntity();
notificationPreference.setUserId(userId);
notificationPreference.setEmailNotificationsEnabled(false);
notificationPreference.setInAppNotificationsEnabled(true);
notificationPreference.setInAppNotifications(Arrays.stream(NotificationType.values()).map(Enum::name).collect(Collectors.toList()));
return notificationPreferencesRepository.save(notificationPreference);
});
}
}
}

View File

@ -7,11 +7,13 @@ import java.util.stream.Collectors;
import javax.transaction.Transactional;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.BeanUtils;
import org.springframework.stereotype.Service;
import com.iqser.red.service.persistence.management.v1.processor.entity.annotations.AnnotationEntityId;
import com.iqser.red.service.persistence.management.v1.processor.entity.annotations.ManualLegalBasisChangeEntity;
import com.iqser.red.service.persistence.management.v1.processor.exception.BadRequestException;
import com.iqser.red.service.persistence.management.v1.processor.exception.NotFoundException;
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.LegalBasisChangeRepository;
import com.iqser.red.service.persistence.service.v1.api.model.annotations.AnnotationStatus;
@ -24,12 +26,13 @@ import lombok.RequiredArgsConstructor;
public class LegalBasisChangePersistenceService {
private final LegalBasisChangeRepository legalBasisChangeRepository;
private final int SECTION_MAX_LENGTH = 1024;
public ManualLegalBasisChangeEntity insert(String fileId, LegalBasisChangeRequest legalBasisChangeRequest) {
ManualLegalBasisChangeEntity manualLegalBasisChange = new ManualLegalBasisChangeEntity();
manualLegalBasisChange.setId(new AnnotationEntityId(legalBasisChangeRequest.getAnnotationId(), fileId));
checkSection(legalBasisChangeRequest.getSection());
BeanUtils.copyProperties(legalBasisChangeRequest, manualLegalBasisChange);
manualLegalBasisChange.setRequestDate(OffsetDateTime.now().truncatedTo(ChronoUnit.MILLIS));
@ -41,6 +44,12 @@ public class LegalBasisChangePersistenceService {
}
private void checkSection(String section) {
if (!StringUtils.isEmpty(section) && section.length() > SECTION_MAX_LENGTH) {
throw new BadRequestException(String.format("The section is too long (%s), max length %s", section.length(), SECTION_MAX_LENGTH));
}
}
@Transactional
public void hardDelete(String fileId, String annotationId) {

View File

@ -17,4 +17,7 @@ public interface DossierTemplateRepository extends JpaRepository<DossierTemplate
@Query("select d from DossierTemplateEntity d where d.id = :dossierTemplateId and d.softDeleteTime is null")
Optional<DossierTemplateEntity> findByIdAndNotDeleted(String dossierTemplateId);
}
boolean existsByName(String name);
}

View File

@ -18,12 +18,7 @@ public interface DownloadStatusRepository extends JpaRepository<DownloadStatusEn
@Modifying
@Query("update DownloadStatusEntity ds set ds.status = :status where ds.storageId = :storageId")
void updateStatus(String storageId, DownloadStatusValue status);
@Modifying
@Query("update DownloadStatusEntity ds set ds.status = :status, ds.fileSize = :fileSize where ds.storageId = :storageId")
void updateStatus(String storageId, DownloadStatusValue status, long fileSize);
@Modifying
@Query("update DownloadStatusEntity ds set ds.lastDownload = :lastDownload where ds.storageId = :storageId")

View File

@ -25,7 +25,7 @@ public interface FileRepository extends JpaRepository<FileEntity, String> {
List<FileEntity> findByDossierId(String dossierId);
List<FileEntity> findByDossierIdInAndAddedBetween(Set<String> dossierIds, OffsetDateTime start, OffsetDateTime end);
List<FileEntity> findByAddedBefore(OffsetDateTime end);
@Modifying
@ -73,8 +73,8 @@ public interface FileRepository extends JpaRepository<FileEntity, String> {
@Modifying(clearAutomatically = true)
@Query("update FileEntity f set f.processingStatus = :processingStatus, f.lastUpdated = :lastUpdated," + " f.hasHighlights = :hasHighlights, f.fileSize = :fileSize, f.processingErrorCounter = :processingErrorCounter " + " where f.id = :fileId")
void updateProcessingStatus(String fileId, ProcessingStatus processingStatus, OffsetDateTime lastUpdated, boolean hasHighlights, long fileSize, int processingErrorCounter);
@Query("update FileEntity f set f.processingStatus = :processingStatus, f.lastUpdated = :lastUpdated," + " f.hasHighlights = :hasHighlights, f.processingErrorCounter = :processingErrorCounter " + " where f.id = :fileId")
void updateProcessingStatus(String fileId, ProcessingStatus processingStatus, OffsetDateTime lastUpdated, boolean hasHighlights, int processingErrorCounter);
@Modifying(clearAutomatically = true)

View File

@ -1,11 +1,19 @@
package com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository;
import java.util.Optional;
import org.springframework.data.jpa.repository.JpaRepository;
import com.iqser.red.service.persistence.management.v1.processor.entity.notification.NotificationPreferencesEntity;
public interface NotificationPreferencesRepository extends JpaRepository<NotificationPreferencesEntity, String> {
boolean existsByUserId(String userId);
Optional<NotificationPreferencesEntity> findByUserId(String userId);
NotificationPreferencesEntity getByUserId(String userId);
void deleteByUserId(String userId);
}

View File

@ -1,7 +1,6 @@
package com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository;
package com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.dictionaryentry;
import java.util.List;
import java.util.Set;
import javax.transaction.Transactional;
@ -11,12 +10,7 @@ import org.springframework.data.jpa.repository.Query;
import com.iqser.red.service.persistence.management.v1.processor.entity.configuration.DictionaryEntryEntity;
public interface EntryRepository extends JpaRepository<DictionaryEntryEntity, Long> {
@Modifying
@Query("update DictionaryEntryEntity e set e.deleted = true, e.version = :version where e.typeId = :typeId and e.value in :values")
void deleteAllByTypeIdAndVersionAndValueIn(String typeId, long version, List<String> values);
public interface EntryRepository extends EntryRepositoryCustom, JpaRepository<DictionaryEntryEntity, Long> {
@Modifying
@Query("update DictionaryEntryEntity e set e.version = :version where e.typeId = :typeId and e.deleted = false")
@ -31,18 +25,12 @@ public interface EntryRepository extends JpaRepository<DictionaryEntryEntity, Lo
List<DictionaryEntryEntity> findByTypeIdAndVersionGreaterThan(String typeId, long version);
@Modifying
@Modifying(flushAutomatically = true, clearAutomatically = true)
@Transactional
@Query("update DictionaryEntryEntity e set e.deleted = true, e.version = :version where e.typeId = :typeId")
void deleteAllEntriesForTypeId(String typeId, long version);
@Modifying(flushAutomatically = true, clearAutomatically = true)
@Transactional
@Query(value = "update dictionary_entry set deleted = false, version = :version where type_id = :typeId and value in (:entries) returning value", nativeQuery = true)
List<String> undeleteEntries(String typeId, Set<String> entries, long version);
@Modifying(flushAutomatically = true, clearAutomatically = true)
@Transactional
@Query(value = "insert into dictionary_entry (value, version, deleted, type_id) " + " select value, 1, false, :newTypeId from dictionary_entry where type_id = :originalTypeId and deleted = false", nativeQuery = true)

View File

@ -0,0 +1,13 @@
package com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.dictionaryentry;
import java.util.List;
import java.util.Set;
public interface EntryRepositoryCustom {
List<String> undeleteEntries(String typeId, Set<String> entries, long version);
void deleteAllByTypeIdAndVersionAndValueIn(String typeId, Set<String> entries, long version);
}

View File

@ -0,0 +1,35 @@
package com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.dictionaryentry;
import java.util.List;
import java.util.Set;
import org.springframework.stereotype.Repository;
import lombok.AccessLevel;
import lombok.RequiredArgsConstructor;
import lombok.experimental.FieldDefaults;
@RequiredArgsConstructor
@FieldDefaults(makeFinal = true, level = AccessLevel.PRIVATE)
@Repository
public class EntryRepositoryImpl implements EntryRepositoryCustom {
private static final String TABLE_NAME = "dictionary_entry";
QueryExecutor queryExecutor;
@Override
public List<String> undeleteEntries(String typeId, Set<String> entries, long version) {
return queryExecutor.runUndeleteQueryInBatches(typeId, entries, version, TABLE_NAME);
}
@Override
public void deleteAllByTypeIdAndVersionAndValueIn(String typeId, Set<String> entries, long version) {
queryExecutor.runDeleteQueryInBatches(typeId, entries, version, TABLE_NAME);
}
}

View File

@ -1,7 +1,6 @@
package com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository;
package com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.dictionaryentry;
import java.util.List;
import java.util.Set;
import javax.transaction.Transactional;
@ -11,12 +10,7 @@ import org.springframework.data.jpa.repository.Query;
import com.iqser.red.service.persistence.management.v1.processor.entity.configuration.DictionaryFalsePositiveEntryEntity;
public interface FalsePositiveEntryRepository extends JpaRepository<DictionaryFalsePositiveEntryEntity, Long> {
@Modifying
@Query("update DictionaryFalsePositiveEntryEntity e set e.deleted = true , e.version = :version where e.typeId = :typeId and e.value in :values")
void deleteAllByTypeIdAndVersionAndValueIn(String typeId, long version, List<String> values);
public interface FalsePositiveEntryRepository extends FalsePositiveEntryRepositoryCustom, JpaRepository<DictionaryFalsePositiveEntryEntity, Long> {
@Modifying
@Query("update DictionaryFalsePositiveEntryEntity e set e.version = :version where e.typeId = :typeId and e.deleted = false")
@ -32,12 +26,6 @@ public interface FalsePositiveEntryRepository extends JpaRepository<DictionaryFa
void deleteAllEntriesForTypeId(String typeId, long version);
@Modifying
@Transactional
@Query(value = "update dictionary_false_positive_entry set deleted = false, version = :version where type_id = :typeId and value in (:entries) returning value", nativeQuery = true)
List<String> undeleteEntries(String typeId, Set<String> entries, long version);
@Modifying(flushAutomatically = true, clearAutomatically = true)
@Transactional
@Query(value = "insert into dictionary_false_positive_entry (value, version, deleted, type_id) " + " select value, 1, false, :newTypeId from dictionary_false_positive_entry where type_id = :originalTypeId and deleted = false", nativeQuery = true)

View File

@ -0,0 +1,13 @@
package com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.dictionaryentry;
import java.util.List;
import java.util.Set;
public interface FalsePositiveEntryRepositoryCustom {
List<String> undeleteEntries(String typeId, Set<String> entries, long version);
void deleteAllByTypeIdAndVersionAndValueIn(String typeId, Set<String> entries, long version);
}

View File

@ -0,0 +1,35 @@
package com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.dictionaryentry;
import java.util.List;
import java.util.Set;
import org.springframework.stereotype.Repository;
import lombok.AccessLevel;
import lombok.RequiredArgsConstructor;
import lombok.experimental.FieldDefaults;
@RequiredArgsConstructor
@FieldDefaults(makeFinal = true, level = AccessLevel.PRIVATE)
@Repository
class FalsePositiveEntryRepositoryImpl implements FalsePositiveEntryRepositoryCustom {
private static final String TABLE_NAME = "dictionary_false_positive_entry";
QueryExecutor queryExecutor;
@Override
public List<String> undeleteEntries(String typeId, Set<String> entries, long version) {
return queryExecutor.runUndeleteQueryInBatches(typeId, entries, version, TABLE_NAME);
}
@Override
public void deleteAllByTypeIdAndVersionAndValueIn(String typeId, Set<String> entries, long version) {
queryExecutor.runDeleteQueryInBatches(typeId, entries, version, TABLE_NAME);
}
}

View File

@ -1,7 +1,6 @@
package com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository;
package com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.dictionaryentry;
import java.util.List;
import java.util.Set;
import javax.transaction.Transactional;
@ -11,12 +10,7 @@ import org.springframework.data.jpa.repository.Query;
import com.iqser.red.service.persistence.management.v1.processor.entity.configuration.DictionaryFalseRecommendationEntryEntity;
public interface FalseRecommendationEntryRepository extends JpaRepository<DictionaryFalseRecommendationEntryEntity, Long> {
@Modifying
@Query("update DictionaryFalseRecommendationEntryEntity e set e.deleted = true , e.version = :version where e.typeId = :typeId and e.value in :values")
void deleteAllByTypeIdAndVersionAndValueIn(String typeId, long version, List<String> values);
public interface FalseRecommendationEntryRepository extends FalseRecommendationEntryRepositoryCustom, JpaRepository<DictionaryFalseRecommendationEntryEntity, Long> {
@Modifying
@Query("update DictionaryFalseRecommendationEntryEntity e set e.version = :version where e.typeId = :typeId and e.deleted = false")
@ -32,12 +26,6 @@ public interface FalseRecommendationEntryRepository extends JpaRepository<Dictio
void deleteAllEntriesForTypeId(String typeId, long version);
@Modifying
@Transactional
@Query(value = "update dictionary_false_recommendation_entry set deleted = false, version = :version where type_id = :typeId and value in (:entries) returning value", nativeQuery = true)
List<String> undeleteEntries(String typeId, Set<String> entries, long version);
@Modifying(flushAutomatically = true, clearAutomatically = true)
@Transactional
@Query(value = "insert into dictionary_false_recommendation_entry (value, version, deleted, type_id) " + " select value, 1, false, :newTypeId from dictionary_false_recommendation_entry where type_id = :originalTypeId and deleted = false", nativeQuery = true)

View File

@ -0,0 +1,13 @@
package com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.dictionaryentry;
import java.util.List;
import java.util.Set;
public interface FalseRecommendationEntryRepositoryCustom {
List<String> undeleteEntries(String typeId, Set<String> entries, long version);
void deleteAllByTypeIdAndVersionAndValueIn(String typeId, Set<String> entries, long version);
}

View File

@ -0,0 +1,35 @@
package com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.dictionaryentry;
import java.util.List;
import java.util.Set;
import org.springframework.stereotype.Repository;
import lombok.AccessLevel;
import lombok.RequiredArgsConstructor;
import lombok.experimental.FieldDefaults;
@RequiredArgsConstructor
@FieldDefaults(makeFinal = true, level = AccessLevel.PRIVATE)
@Repository
class FalseRecommendationEntryRepositoryImpl implements FalseRecommendationEntryRepositoryCustom {
private static final String TABLE_NAME = "dictionary_false_recommendation_entry";
QueryExecutor queryExecutor;
@Override
public List<String> undeleteEntries(String typeId, Set<String> entries, long version) {
return queryExecutor.runUndeleteQueryInBatches(typeId, entries, version, TABLE_NAME);
}
@Override
public void deleteAllByTypeIdAndVersionAndValueIn(String typeId, Set<String> entries, long version) {
queryExecutor.runDeleteQueryInBatches(typeId, entries, version, TABLE_NAME);
}
}

View File

@ -0,0 +1,124 @@
package com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.dictionaryentry;
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.util.Set;
import javax.persistence.EntityManager;
import javax.persistence.Query;
import javax.transaction.Transactional;
import org.springframework.stereotype.Component;
import lombok.AccessLevel;
import lombok.RequiredArgsConstructor;
import lombok.experimental.FieldDefaults;
@RequiredArgsConstructor
@Component
@FieldDefaults(makeFinal = true, level = AccessLevel.PRIVATE)
class QueryExecutor {
private static final String FETCH_ENTRY_VALUES_QUERY = """
select value from ::tableName::
where type_id = :typeId and value in (:entries)""";
private static final String UPDATE_ENTRIES_QUERY = """
update ::tableName::
set deleted = ::deleted::, version = :version
where type_id = :typeId and value in (:entries)""";
// Currently (2023-04-13) the Postgres JDBC driver limits the number of elements in an "IN" clause
// to the maximum value of a 'short'. We subtract a small margin to be on the safe side, since it is
// unclear exactly what counts towards that limit (only the elements, or parentheses etc.).
private static final int ELEMENT_CHUNK_SIZE = Short.MAX_VALUE - 10;
EntityManager entityManager;
@Transactional
public LinkedList<String> runUndeleteQueryInBatches(String typeId, Set<String> entries, long version, String tableName) {
return runUpdateQueryInBatches(typeId, entries, version, tableName, false, true);
}
private LinkedList<String> runUpdateQueryInBatches(String typeId, Set<String> entries, long version, String tableName, boolean deleted, boolean collectChangedValues) {
var results = new LinkedList<String>();
var entryList = new ArrayList<>(entries);
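// Work through the entries in windows of at most ELEMENT_CHUNK_SIZE elements; each window
// becomes one native update (and, if requested, one fetch of the values that were affected).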
for (int fromIndex = 0, toIndex = ELEMENT_CHUNK_SIZE; ; ) {
toIndex = Math.min(toIndex, entryList.size());
if (fromIndex >= entryList.size()) {
break;
}
var values = entryList.subList(fromIndex, toIndex);
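// When the caller needs the changed values back, read them before the update so only
// entries that actually exist for this type are reported.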
if (collectChangedValues) {
var entryValues = executeFetchValuesQuery(typeId, tableName, values);
results.addAll(entryValues);
}
executeUpdateQuery(typeId, version, tableName, values, deleted);
fromIndex += ELEMENT_CHUNK_SIZE;
toIndex += ELEMENT_CHUNK_SIZE;
}
return results;
}
private void executeUpdateQuery(String typeId, long version, String tableName, List<String> values, boolean deleted) {
String updateSql = getUpdateEntriesQuery(tableName, deleted);
Query updateEntriesQuery = entityManager.createNativeQuery(updateSql);
updateEntriesQuery.setParameter("typeId", typeId);
updateEntriesQuery.setParameter("version", version);
updateEntriesQuery.setParameter("entries", values);
updateEntriesQuery.executeUpdate();
}
// The call to query.getResultList() returns an untyped list; there is no way around that, so we suppress the warning.
// CAUTION: Make sure that the query actually returns a list of Strings.
@SuppressWarnings("unchecked")
private List<String> executeFetchValuesQuery(String typeId, String tableName, List<String> values) {
String fetchSql = getFetchEntryValuesQuery(tableName);
Query fetchEntryValuesQuery = entityManager.createNativeQuery(fetchSql);
fetchEntryValuesQuery.setParameter("typeId", typeId);
fetchEntryValuesQuery.setParameter("entries", values);
return fetchEntryValuesQuery.getResultList();
}
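// Table names cannot be bound as query parameters, so they are substituted directly into the
// SQL templates. The names are internal constants in the repository implementations, never user input.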
private String getFetchEntryValuesQuery(String tableName) {
return FETCH_ENTRY_VALUES_QUERY.replace("::tableName::", tableName);
}
private String getUpdateEntriesQuery(String tableName, boolean deleted) {
return UPDATE_ENTRIES_QUERY.replace("::tableName::", tableName).replace("::deleted::", Boolean.toString(deleted));
}
@Transactional
public void runDeleteQueryInBatches(String typeId, Set<String> entries, long version, String tableName) {
runUpdateQueryInBatches(typeId, entries, version, tableName, true, false);
}
}

View File

@ -72,6 +72,7 @@
<dependency>
<groupId>com.iqser.red.commons</groupId>
<artifactId>storage-commons</artifactId>
<version>1.8.1</version>
</dependency>
<dependency>
<groupId>com.iqser.red.service</groupId>

View File

@ -2,10 +2,13 @@ package com.iqser.red.service.peristence.v1.server.controller;
import static com.iqser.red.service.persistence.management.v1.processor.utils.MagicConverter.convert;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;
import javax.transaction.Transactional;
import org.apache.commons.lang3.StringUtils;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestParam;
@ -98,9 +101,12 @@ public class DictionaryController implements DictionaryResource {
var entity = dictionaryPersistenceService.getType(typeId);
var target = convert(entity, Type.class);
target.setEntries(convert(entryPersistenceService.getEntries(typeId, DictionaryEntryType.ENTRY, fromVersion), DictionaryEntry.class));
target.setFalsePositiveEntries(convert(entryPersistenceService.getEntries(typeId, DictionaryEntryType.FALSE_POSITIVE, fromVersion), DictionaryEntry.class));
target.setFalseRecommendationEntries(convert(entryPersistenceService.getEntries(typeId, DictionaryEntryType.FALSE_RECOMMENDATION, fromVersion), DictionaryEntry.class));
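// Sort each entry list case-insensitively by value so the dictionary entries come back in alphabetical order regardless of case.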
target.setEntries(convert(entryPersistenceService.getEntries(typeId, DictionaryEntryType.ENTRY, fromVersion), DictionaryEntry.class)
.stream().sorted(Comparator.comparing(input -> StringUtils.lowerCase(input.getValue()))).collect(Collectors.toList()));
target.setFalsePositiveEntries(convert(entryPersistenceService.getEntries(typeId, DictionaryEntryType.FALSE_POSITIVE, fromVersion), DictionaryEntry.class)
.stream().sorted(Comparator.comparing(input -> StringUtils.lowerCase(input.getValue()))).collect(Collectors.toList()));
target.setFalseRecommendationEntries(convert(entryPersistenceService.getEntries(typeId, DictionaryEntryType.FALSE_RECOMMENDATION, fromVersion), DictionaryEntry.class)
.stream().sorted(Comparator.comparing(input -> StringUtils.lowerCase(input.getValue()))).collect(Collectors.toList()));
return target;
}

View File

@ -19,7 +19,6 @@ public class DossierStatsController implements DossierStatsResource {
private final DossierStatsService dossierStatsService;
@Deprecated
@Override
public DossierStats getDossierStats(String dossierId) {
@ -27,7 +26,6 @@ public class DossierStatsController implements DossierStatsResource {
}
@Deprecated
@Override
public List<DossierStats> getDossierStats(Set<String> dossierIds) {

View File

@ -19,11 +19,9 @@ public class LicenseReportController implements LicenseReportResource {
@Override
public LicenseReport getLicenseReport(@RequestBody LicenseReportRequest licenseReportRequest,
@RequestParam(value = "offset", defaultValue = "0") int offset,
@RequestParam(value = "limit", defaultValue = "20") int limit) {
public LicenseReport getLicenseReport(@RequestBody LicenseReportRequest licenseReportRequest) {
return licenseReportService.getLicenseReport(licenseReportRequest, offset, limit);
return licenseReportService.getLicenseReport(licenseReportRequest);
}
}

View File

@ -14,6 +14,7 @@ import org.quartz.Calendar;
import org.quartz.JobDetail;
import org.quartz.Scheduler;
import org.quartz.Trigger;
import org.quartz.impl.jdbcjobstore.PostgreSQLDelegate;
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.autoconfigure.AutoConfigureAfter;
@ -58,6 +59,7 @@ public class CustomQuartzConfiguration {
@Bean
@ConditionalOnMissingBean
public SchedulerFactoryBean quartzScheduler(QuartzProperties properties,
@Qualifier("masterDataSource") DataSource dataSource,
ObjectProvider<SchedulerFactoryBeanCustomizer> customizers,
ObjectProvider<JobDetail> jobDetails,
Map<String, Calendar> calendars,
@ -72,6 +74,7 @@ public class CustomQuartzConfiguration {
schedulerFactoryBean.setSchedulerName(properties.getSchedulerName());
}
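// Attach the injected (master) data source so the JDBC-backed, clustered Quartz job store has a database to persist its scheduling state in.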
schedulerFactoryBean.setDataSource(dataSource);
schedulerFactoryBean.setAutoStartup(properties.isAutoStartup());
schedulerFactoryBean.setStartupDelay((int) properties.getStartupDelay().getSeconds());
schedulerFactoryBean.setWaitForJobsToCompleteOnShutdown(properties.isWaitForJobsToCompleteOnShutdown());
@ -80,11 +83,11 @@ public class CustomQuartzConfiguration {
schedulerFactoryBean.setQuartzProperties(this.asProperties(properties.getProperties()));
}
schedulerFactoryBean.setJobDetails((JobDetail[]) jobDetails.orderedStream().toArray((x$0) -> {
schedulerFactoryBean.setJobDetails(jobDetails.orderedStream().toArray((x$0) -> {
return new JobDetail[x$0];
}));
schedulerFactoryBean.setCalendars(calendars);
schedulerFactoryBean.setTriggers((Trigger[]) triggers.orderedStream().toArray((x$0) -> {
schedulerFactoryBean.setTriggers(triggers.orderedStream().toArray((x$0) -> {
return new Trigger[x$0];
}));
customizers.orderedStream().forEach((customizer) -> {
@ -135,7 +138,7 @@ public class CustomQuartzConfiguration {
private DataSource getDataSource(DataSource dataSource, ObjectProvider<DataSource> quartzDataSource) {
DataSource dataSourceIfAvailable = (DataSource) quartzDataSource.getIfAvailable();
DataSource dataSourceIfAvailable = quartzDataSource.getIfAvailable();
return dataSourceIfAvailable != null ? dataSourceIfAvailable : dataSource;
}
@ -143,8 +146,8 @@ public class CustomQuartzConfiguration {
private PlatformTransactionManager getTransactionManager(ObjectProvider<PlatformTransactionManager> transactionManager,
ObjectProvider<PlatformTransactionManager> quartzTransactionManager) {
PlatformTransactionManager transactionManagerIfAvailable = (PlatformTransactionManager) quartzTransactionManager.getIfAvailable();
return transactionManagerIfAvailable != null ? transactionManagerIfAvailable : (PlatformTransactionManager) transactionManager.getIfUnique();
PlatformTransactionManager transactionManagerIfAvailable = quartzTransactionManager.getIfAvailable();
return transactionManagerIfAvailable != null ? transactionManagerIfAvailable : transactionManager.getIfUnique();
}
@ -164,7 +167,7 @@ public class CustomQuartzConfiguration {
OnQuartzDatasourceInitializationCondition() {
super("Quartz", new String[]{"spring.quartz.jdbc.initialize-schema"});
super("Quartz", "spring.quartz.jdbc.initialize-schema");
}
}

View File

@ -1,9 +1,9 @@
package com.iqser.red.service.peristence.v1.server.service;
import static com.iqser.red.service.persistence.management.v1.processor.utils.MagicConverter.convert;
import static java.util.stream.Collectors.toList;
import static java.util.stream.Collectors.toSet;
import java.util.HashSet;
import java.util.List;
import java.util.Optional;
import java.util.Set;
@ -11,6 +11,8 @@ import java.util.function.Predicate;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import javax.transaction.Transactional;
import org.apache.commons.collections4.CollectionUtils;
import org.apache.commons.lang3.StringUtils;
import org.springframework.stereotype.Service;
@ -41,6 +43,7 @@ public class DictionaryService {
private final StopwordService stopwordService;
@Transactional
public Type addType(Type typeRequest) {
if (typeRequest.getDossierTemplateId() == null) {
@ -182,13 +185,13 @@ public class DictionaryService {
var currentVersion = getCurrentVersion(typeResult);
if (typeResult.isCaseInsensitive()) {
List<String> existing = entryPersistenceService.getEntries(typeId, dictionaryEntryType, null).stream().map(BaseDictionaryEntry::getValue).collect(toList());
List<String> existing = entryPersistenceService.getEntries(typeId, dictionaryEntryType, null).stream().map(BaseDictionaryEntry::getValue).toList();
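// For case-insensitive types, delete the stored values that match any of the requested entries
// ignoring case; for case-sensitive types the requested values are deleted as-is.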
entryPersistenceService.deleteEntries(typeId,
existing.stream().filter(e -> entries.stream().anyMatch(e::equalsIgnoreCase)).collect(toList()),
existing.stream().filter(e -> entries.stream().anyMatch(e::equalsIgnoreCase)).collect(toSet()),
currentVersion + 1,
dictionaryEntryType);
} else {
entryPersistenceService.deleteEntries(typeId, entries, currentVersion + 1, dictionaryEntryType);
entryPersistenceService.deleteEntries(typeId, new HashSet<>(entries), currentVersion + 1, dictionaryEntryType);
}
dictionaryPersistenceService.incrementVersion(typeId);

View File

@ -24,7 +24,6 @@ import java.util.stream.Collectors;
import javax.transaction.Transactional;
import com.iqser.red.service.peristence.v1.server.settings.FileManagementServiceSettings;
import org.apache.commons.collections4.CollectionUtils;
import org.apache.commons.compress.archivers.zip.ZipArchiveEntry;
import org.apache.commons.compress.archivers.zip.ZipArchiveInputStream;
@ -36,6 +35,7 @@ import org.springframework.web.bind.annotation.RequestBody;
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.datatype.jsr310.JavaTimeModule;
import com.iqser.red.service.peristence.v1.server.settings.FileManagementServiceSettings;
import com.iqser.red.service.peristence.v1.server.utils.FileUtils;
import com.iqser.red.service.persistence.management.v1.processor.entity.configuration.BaseDictionaryEntry;
import com.iqser.red.service.persistence.management.v1.processor.entity.configuration.ColorsEntity;
@ -376,22 +376,16 @@ public class DossierTemplateImportService {
private void validateDossierTemplateName(DossierTemplate dossierTemplateMeta) {
boolean cond = true;
int index = 0;
int nameSuffix = 0;
String dossierTemplateName = dossierTemplateMeta.getName();
do {
try {
dossierTemplatePersistenceService.validateDossierTemplateNameIsUnique(dossierTemplateMeta.getName());
cond = false;
} catch (ConflictException e) {
if (index == 0) {
dossierTemplateMeta.setName("Copy of " + dossierTemplateName);
} else {
dossierTemplateMeta.setName("Copy of " + dossierTemplateName + " - " + index);
}
index++;
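// Keep renaming the imported template ("Copy of <name>", then "Copy of <name> - 1", "- 2", ...)
// until the name no longer collides with an existing dossier template.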
while (dossierTemplatePersistenceService.isDossierTemplateNameNotUnique(dossierTemplateMeta.getName())) {
if (nameSuffix == 0) {
dossierTemplateMeta.setName("Copy of " + dossierTemplateName);
} else {
dossierTemplateMeta.setName("Copy of " + dossierTemplateName + " - " + nameSuffix);
}
} while (cond);
nameSuffix++;
}
}
@ -449,10 +443,8 @@ public class DossierTemplateImportService {
dictionaryPersistenceService.incrementVersion(typeId);
typeIdsAdded.add(typeId); // added to the list, since the type can not be deleted
});
Set<String> typesToRemove = currentTypes.stream()
.filter(t -> !t.isDeleted()) // remove the ones already soft deleted
.map(TypeEntity::getId)
.filter(t -> !typeIdsAdded.contains(t)) // exclude the type ids already added from the import
Set<String> typesToRemove = currentTypes.stream().filter(t -> !t.isDeleted()) // remove the ones already soft deleted
.map(TypeEntity::getId).filter(t -> !typeIdsAdded.contains(t)) // exclude the type ids already added from the import
.filter(t -> !currentTypesIdSystemManaged.contains(t)) // exclude the types system managed
.collect(Collectors.toSet());
typesToRemove.forEach(dictionaryService::deleteType);
@ -568,7 +560,7 @@ public class DossierTemplateImportService {
double compressionRatio = (float) totalSizeEntry / ze.getCompressedSize();
if (compressionRatio > settings.getCompressionThresholdRatio()) {
log.debug("zip entry: " + ze.getName() + " - totalSizeEntry: " + totalSizeEntry + " ze.getCompressedSize(): " + ze.getCompressedSize() + " compressionRatio: " + compressionRatio);
log.debug("zip entry: " + ze.getName() + " - totalSizeEntry: " + totalSizeEntry + " ze.getCompressedSize(): " + ze.getCompressedSize() + " compressionRatio: " + compressionRatio);
// ratio between compressed and uncompressed data is highly suspicious, looks like a Zip Bomb Attack
throw new BadRequestException("ZIP-Bomb detected (compressionRatio).");
}

View File

@ -71,7 +71,7 @@ public class FileService {
} else {
// the file is new, should create a new status for it.
log.info("File {} has no status yet, creating one and setting to unprocessed.", request.getFilename());
fileStatusService.createStatus(request.getDossierId(), request.getFileId(), request.getUploader(), request.getFilename());
fileStatusService.createStatus(request.getDossierId(), request.getFileId(), request.getUploader(), request.getFilename(), request.getFileSize());
}
return new JSONPrimitive<>(request.getFileId());

View File

@ -6,7 +6,6 @@ import org.springframework.web.bind.annotation.RestController;
import com.iqser.red.service.pdftron.redaction.v1.api.model.UntouchedDocumentResponse;
import com.iqser.red.service.peristence.v1.server.settings.FileManagementServiceSettings;
import com.iqser.red.service.persistence.management.v1.processor.exception.ConflictException;
import com.iqser.red.service.persistence.management.v1.processor.model.OCRStatusUpdateResponse;
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.DossierPersistenceService;
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.FileStatusPersistenceService;
import com.iqser.red.service.persistence.service.v1.api.model.dossiertemplate.dossier.file.ProcessingStatus;
@ -131,7 +130,6 @@ public class FileStatusProcessingUpdateService {
retryTemplate.execute(retryContext -> {
log.info("OCR Successful for dossier {} and file {}, Attempt to update status: {}", dossierId, fileId, retryContext.getRetryCount());
fileStatusService.updateOCRStatus(OCRStatusUpdateResponse.builder().fileId(fileId).ocrFinished(true).build());
fileStatusService.setStatusFullReprocess(dossierId, fileId, false, true);
return null;

View File

@ -103,12 +103,11 @@ public class FileStatusService {
return reanalysisRequiredStatusService.enhanceFileStatusWithAnalysisRequirements(convertedList);
}
@Transactional
public List<FileModel> getStatusesAddedBefore(OffsetDateTime end) {
public List<FileModel> getStatusesForDossiersAndTimePeriod(Set<String> dossierIds, OffsetDateTime start, OffsetDateTime end) {
var fileEntities = fileStatusPersistenceService.getStatusesForDossiersAndTimePeriod(dossierIds, start, end);
var convertedList = convert(fileEntities, FileModel.class, new FileModelMapper());
return reanalysisRequiredStatusService.enhanceFileStatusWithAnalysisRequirements(convertedList);
var fileEntities = fileStatusPersistenceService.getStatusesAddedBefore(end);
return convert(fileEntities, FileModel.class, new FileModelMapper());
}
@ -279,9 +278,9 @@ public class FileStatusService {
@Transactional
public void createStatus(String dossierId, String fileId, String uploader, String filename) {
public void createStatus(String dossierId, String fileId, String uploader, String filename, long fileSize) {
fileStatusPersistenceService.createStatus(dossierId, fileId, filename, uploader);
fileStatusPersistenceService.createStatus(dossierId, fileId, filename, uploader, fileSize);
addToAnalysisQueue(dossierId, fileId, false, Set.of(), false);
}

View File

@ -1,27 +1,28 @@
package com.iqser.red.service.peristence.v1.server.service;
import java.time.Duration;
import java.time.Instant;
import java.time.OffsetDateTime;
import java.time.YearMonth;
import java.time.ZoneId;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.HashSet;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.UUID;
import java.util.function.Function;
import java.util.stream.Collectors;
import org.apache.commons.collections4.CollectionUtils;
import org.apache.commons.lang3.StringUtils;
import org.springframework.stereotype.Service;
import com.iqser.red.service.persistence.management.v1.processor.entity.dossier.DossierEntity;
import com.iqser.red.service.persistence.management.v1.processor.exception.BadRequestException;
import com.iqser.red.service.persistence.service.v1.api.model.dossiertemplate.dossier.file.FileModel;
import com.iqser.red.service.persistence.service.v1.api.model.license.LicenseReport;
import com.iqser.red.service.persistence.service.v1.api.model.license.LicenseReportRequest;
import com.iqser.red.service.persistence.service.v1.api.model.license.ReportData;
import com.iqser.red.service.persistence.service.v1.api.model.license.MonthlyReportData;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
@ -34,107 +35,180 @@ public class LicenseReportService {
private final FileStatusService fileStatusService;
private final DossierService dossierService;
public LicenseReport getLicenseReport(LicenseReportRequest licenseReportRequest) {
public LicenseReport getLicenseReport(LicenseReportRequest licenseReportRequest, int offset, int limit) {
log.info("Generating licence-report");
Instant start = null;
if (log.isInfoEnabled()) {
start = Instant.now();
if (licenseReportRequest.getStartDate() == null || licenseReportRequest.getStartDate().isAfter(Instant.now())) {
throw new BadRequestException("Invalid start date.");
}
if (StringUtils.isEmpty(licenseReportRequest.getRequestId())) {
licenseReportRequest.setRequestId(UUID.randomUUID().toString());
if (licenseReportRequest.getStartDate().isAfter(licenseReportRequest.getEndDate())) {
throw new BadRequestException("Invalid date period: End date is before start date.");
}
DetailedReportData detailedReportData = loadDetailedReportData(licenseReportRequest);
var files = fileStatusService.getStatusesAddedBefore(OffsetDateTime.ofInstant(licenseReportRequest.getEndDate(), UTC_ZONE_ID));
var addDossiers = dossierService.getAllDossiers();
LicenseReport licenseReport = new LicenseReport();
licenseReport.setNumberOfAnalyzedPages(detailedReportData.getTotalPagesAnalyzed());
licenseReport.setNumberOfAnalyzedFiles(detailedReportData.getData().size());
licenseReport.setNumberOfOcrFiles((int) detailedReportData.getData().stream().filter(reportData -> reportData.getNumberOfOcrPages() > 0).count());
licenseReport.setNumberOfOcrPages(detailedReportData.getTotalOcrPages());
licenseReport.setNumberOfDossiers(detailedReportData.getNumberOfDossiers());
licenseReport.setNumberOfAnalyses(detailedReportData.getTotalNumberOfAnalyses());
licenseReport.setData(detailedReportData.getData().subList(offset, Math.min(offset + limit, detailedReportData.getData().size())));
licenseReport.setOffset(offset);
licenseReport.setLimit(limit);
licenseReport.setStartDate(detailedReportData.getStartDate());
licenseReport.setEndDate(detailedReportData.getEndDate());
licenseReport.setRequestId(licenseReportRequest.getRequestId());
files.sort(Comparator.comparing(FileModel::getAdded));
if (start != null) {
log.info("getLicenceReport took {} to process", Duration.between(start, Instant.now()).toString());
var dossiersById = addDossiers.stream().collect(Collectors.toMap(DossierEntity::getId, Function.identity()));
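// Bucket the relevant file events (uploads, soft deletes, archives, hard deletes, OCR runs)
// by calendar month so the report can be rolled forward one month at a time.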
Map<YearMonth, Set<FileModel>> adds = new HashMap<>();
Map<YearMonth, Set<FileModel>> softDeletes = new HashMap<>();
Map<YearMonth, Set<FileModel>> archives = new HashMap<>();
Map<YearMonth, Set<FileModel>> hardDeletes = new HashMap<>();
Map<YearMonth, Set<FileModel>> ocrs = new HashMap<>();
for (var file : files) {
adds.computeIfAbsent(YearMonth.from(file.getAdded()), entry -> new HashSet<>()).add(file);
if (file.getDeleted() != null && file.getDeleted().toInstant().isBefore(licenseReportRequest.getEndDate())) {
softDeletes.computeIfAbsent(YearMonth.from(file.getDeleted()), entry -> new HashSet<>()).add(file);
}
if (dossiersById.get(file.getDossierId()).getSoftDeletedTime() != null && dossiersById.get(file.getDossierId()).getSoftDeletedTime().toInstant().isBefore(licenseReportRequest.getEndDate())) {
softDeletes.computeIfAbsent(YearMonth.from(dossiersById.get(file.getDossierId()).getSoftDeletedTime()), entry -> new HashSet<>()).add(file);
}
if (file.getHardDeletedTime() != null && file.getHardDeletedTime().toInstant().isBefore(licenseReportRequest.getEndDate())) {
hardDeletes.computeIfAbsent(YearMonth.from(file.getHardDeletedTime()), entry -> new HashSet<>()).add(file);
}
if (dossiersById.get(file.getDossierId()).getHardDeletedTime() != null && dossiersById.get(file.getDossierId()).getHardDeletedTime().toInstant().isBefore(licenseReportRequest.getEndDate())) {
hardDeletes.computeIfAbsent(YearMonth.from(dossiersById.get(file.getDossierId()).getHardDeletedTime()), entry -> new HashSet<>()).add(file);
}
if (dossiersById.get(file.getDossierId()).getArchivedTime() != null && dossiersById.get(file.getDossierId())
.getArchivedTime()
.toInstant()
.isBefore(licenseReportRequest.getEndDate())) {
archives.computeIfAbsent(YearMonth.from(dossiersById.get(file.getDossierId()).getArchivedTime()), entry -> new HashSet<>()).add(file);
}
if (file.getOcrStartTime() != null && file.getOcrStartTime().toInstant().isBefore(licenseReportRequest.getEndDate())) {
ocrs.computeIfAbsent(YearMonth.from(file.getOcrStartTime()), entry -> new HashSet<>()).add(file);
}
}
return licenseReport;
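// Start the roll-forward at the month of the oldest file if it predates the requested start date,
// so the running storage totals include files uploaded before the reporting period.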
YearMonth currentMonth = !files.isEmpty() && files.get(0).getAdded().toInstant().isBefore(licenseReportRequest.getStartDate()) ? YearMonth.from(files.get(0)
.getAdded()) : YearMonth.from(licenseReportRequest.getStartDate().atZone(UTC_ZONE_ID).toLocalDate());
YearMonth endMonth = YearMonth.from(licenseReportRequest.getEndDate().atZone(UTC_ZONE_ID).toLocalDate());
YearMonth reportStartMonth = YearMonth.from(licenseReportRequest.getStartDate().atZone(UTC_ZONE_ID).toLocalDate());
}
List<MonthlyReportData> monthlyData = new ArrayList<>();
// These values reflect what is in the system at that point in time, including everything added before the reporting period.
long activeFilesUploadedBytes = 0;
long trashFilesUploadedBytes = 0;
long archivedFilesUploadedBytes = 0;
private DetailedReportData loadDetailedReportData(LicenseReportRequest licenseReportRequest) {
// These values are not what is currently in the system at that point; they are what was added during the reporting period.
int numberOfAnalyzedPages = 0;
int numberOfOcrPages = 0;
int numberOfAnalyzedFiles = 0;
long analysedFilesBytes = 0;
int numberOfOcrFiles = 0;
log.debug("No licence-report found in cache, generating new report");
Instant start = null;
if (log.isInfoEnabled()) {
start = Instant.now();
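// Walk month by month up to the report end date, updating the running byte totals and,
// once the reporting period is reached, emitting one MonthlyReportData row per month.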
while (!currentMonth.isAfter(endMonth)) {
int currentMonthNumberOfAnalyzedPages = 0;
long currentMonthAnalysedFilesBytes = 0;
int currentMonthNumberOfOcrPages = 0;
var addedFilesInMonth = adds.get(currentMonth);
if (addedFilesInMonth != null) {
for (var add : addedFilesInMonth) {
activeFilesUploadedBytes += add.getFileSize();
if (add.getAdded().toInstant().isAfter(licenseReportRequest.getStartDate())) {
numberOfAnalyzedPages += add.getNumberOfPages();
currentMonthNumberOfAnalyzedPages += add.getNumberOfPages();
numberOfAnalyzedFiles++;
analysedFilesBytes += add.getFileSize();
currentMonthAnalysedFilesBytes += add.getFileSize();
}
}
}
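// A soft delete moves the file's bytes into the trash bucket; depending on whether its dossier
// was archived, the bytes leave either the archived or the active bucket.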
var softDeletedFilesInMonth = softDeletes.get(currentMonth);
if (softDeletedFilesInMonth != null) {
for (var softDeleted : softDeletedFilesInMonth) {
if (dossiersById.get(softDeleted.getDossierId()).getArchivedTime() != null) {
archivedFilesUploadedBytes -= softDeleted.getFileSize();
} else {
activeFilesUploadedBytes -= softDeleted.getFileSize();
}
trashFilesUploadedBytes += softDeleted.getFileSize();
}
}
var archivedFilesInMonth = archives.get(currentMonth);
if (archivedFilesInMonth != null) {
for (var archived : archivedFilesInMonth) {
activeFilesUploadedBytes -= archived.getFileSize();
archivedFilesUploadedBytes += archived.getFileSize();
}
}
var hardDeletedFilesInMonth = hardDeletes.get(currentMonth);
if (hardDeletedFilesInMonth != null) {
for (var hardDeleted : hardDeletedFilesInMonth) {
if (hardDeleted.getDeleted() != null || dossiersById.get(hardDeleted.getDossierId()).getSoftDeletedTime() != null) {
trashFilesUploadedBytes -= hardDeleted.getFileSize();
} else {
activeFilesUploadedBytes -= hardDeleted.getFileSize();
}
}
}
var ocrFilesInMonth = ocrs.get(currentMonth);
if (ocrFilesInMonth != null) {
for (var ocrFile : ocrFilesInMonth) {
if (ocrFile.getOcrStartTime().toInstant().isAfter(licenseReportRequest.getStartDate())) {
numberOfOcrPages += ocrFile.getNumberOfPages(); // We count the entire document if ocr is performed.
currentMonthNumberOfOcrPages += ocrFile.getNumberOfPages();
numberOfOcrFiles++;
}
}
}
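// Only months inside the requested reporting period produce a MonthlyReportData row;
// earlier months contribute only to the running totals carried forward.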
if (currentMonth.equals(reportStartMonth) || currentMonth.isAfter(reportStartMonth)) {
var monthEndDate = currentMonth.atEndOfMonth().atTime(23, 59).atZone(UTC_ZONE_ID).toInstant();
monthlyData.add(MonthlyReportData.builder()
.startDate(currentMonth.atDay(1).atStartOfDay(UTC_ZONE_ID).toInstant())
.endDate(monthEndDate.isBefore(licenseReportRequest.getEndDate()) ? monthEndDate : licenseReportRequest.getEndDate())
.activeFilesUploadedBytes(activeFilesUploadedBytes)
.trashFilesUploadedBytes(trashFilesUploadedBytes)
.archivedFilesUploadedBytes(archivedFilesUploadedBytes)
.totalFilesUploadedBytes(activeFilesUploadedBytes + trashFilesUploadedBytes + archivedFilesUploadedBytes)
.numberOfAnalyzedPages(currentMonthNumberOfAnalyzedPages)
.numberOfOcrPages(currentMonthNumberOfOcrPages)
.analysedFilesBytes(currentMonthAnalysedFilesBytes)
.build());
}
currentMonth = currentMonth.plusMonths(1);
}
final Set<String> dossierIds;
if (CollectionUtils.isEmpty(licenseReportRequest.getDossierIds())) {
dossierIds = dossierService.getAllDossiers().stream().map(DossierEntity::getId).collect(Collectors.toSet());
} else {
dossierIds = new HashSet<>(licenseReportRequest.getDossierIds());
}
var result = new DetailedReportData(new LinkedList<>(), dossierIds.size(), 0, 0, 0, licenseReportRequest.getStartDate(), licenseReportRequest.getEndDate());
fileStatusService.getStatusesForDossiersAndTimePeriod( //
dossierIds, //
OffsetDateTime.ofInstant(licenseReportRequest.getStartDate(), UTC_ZONE_ID), //
OffsetDateTime.ofInstant(licenseReportRequest.getEndDate(), UTC_ZONE_ID)) //
.forEach(fileStatus -> {
ReportData reportData = new ReportData();
reportData.setDossier(fileStatus.getDossierId());
reportData.setFileName(fileStatus.getFilename());
reportData.setAddedDate(fileStatus.getAdded().toInstant());
reportData.setLastUpdatedDate(fileStatus.getLastUpdated() == null ? null : fileStatus.getLastUpdated().toInstant());
reportData.setDeletedDate(fileStatus.getDeleted() == null ? null : fileStatus.getDeleted().toInstant());
reportData.setWorkflowStatus(fileStatus.getWorkflowStatus());
reportData.setNumberOfAnalyzedPages(fileStatus.getNumberOfPages());
reportData.setNumberOfOcrPages(fileStatus.getOcrStartTime() != null ? fileStatus.getNumberOfPages() : 0);
reportData.setAnalysisCount(fileStatus.getNumberOfAnalyses());
result.totalPagesAnalyzed += fileStatus.getNumberOfPages();
result.totalOcrPages += fileStatus.getOcrStartTime() != null ? fileStatus.getNumberOfPages() : 0;
result.totalNumberOfAnalyses += fileStatus.getNumberOfAnalyses();
result.data.add(reportData);
});
if (start != null) {
log.info("loadReport took {} to process", Duration.between(start, Instant.now()).toString());
}
return result;
}
// This was also used to cache results so that subsequent pagination worked faster.
// Currently pagination is unused, and it is unclear whether caching is needed.
// This intermediate object may be reused for caching and pagination if needed.
@Data
@AllArgsConstructor
private static class DetailedReportData {
List<ReportData> data;
int numberOfDossiers;
int totalNumberOfAnalyses;
int totalPagesAnalyzed;
int totalOcrPages;
Instant startDate;
Instant endDate;
return LicenseReport.builder()
.totalFilesUploadedBytes(activeFilesUploadedBytes + trashFilesUploadedBytes + archivedFilesUploadedBytes)
.activeFilesUploadedBytes(activeFilesUploadedBytes)
.trashFilesUploadedBytes(trashFilesUploadedBytes)
.archivedFilesUploadedBytes(archivedFilesUploadedBytes)
.numberOfAnalyzedPages(numberOfAnalyzedPages)
.numberOfOcrPages(numberOfOcrPages)
.numberOfAnalyzedFiles(numberOfAnalyzedFiles)
.analysedFilesBytes(analysedFilesBytes)
.numberOfOcrFiles(numberOfOcrFiles)
.numberOfDossiers(addDossiers.stream().filter(dossier -> //
dossier.getDate().toInstant().isAfter(licenseReportRequest.getStartDate()) //
&& dossier.getDate().toInstant().isBefore(licenseReportRequest.getEndDate()) //
&& (dossier.getHardDeletedTime() == null || dossier.getHardDeletedTime()
.isAfter(OffsetDateTime.ofInstant(licenseReportRequest.getEndDate(), UTC_ZONE_ID)))).collect(Collectors.toSet()).size())
.startDate(licenseReportRequest.getStartDate())
.endDate(licenseReportRequest.getEndDate())
.monthlyData(monthlyData)
.build();
}
}

View File

@ -15,6 +15,7 @@ import java.util.stream.Collectors;
import javax.transaction.Transactional;
import org.apache.commons.lang3.StringUtils;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Service;
@ -60,6 +61,7 @@ import com.iqser.red.service.persistence.service.v1.api.model.annotations.Remove
import com.iqser.red.service.persistence.service.v1.api.model.annotations.ResizeRedactionRequest;
import com.iqser.red.service.persistence.service.v1.api.model.annotations.entitymapped.IdRemoval;
import com.iqser.red.service.persistence.service.v1.api.model.annotations.entitymapped.ManualRedactionEntry;
import com.iqser.red.service.persistence.service.v1.api.model.dossiertemplate.dossier.Dossier;
import com.iqser.red.service.persistence.service.v1.api.model.dossiertemplate.dossier.file.ProcessingStatus;
import com.iqser.red.service.persistence.service.v1.api.model.dossiertemplate.dossier.file.WorkflowStatus;
import com.iqser.red.service.persistence.service.v1.api.model.dossiertemplate.type.DictionaryEntryType;
@ -99,8 +101,10 @@ public class ManualRedactionService {
private final RedactionLogService redactionLogService;
private final HashFunction hashFunction = Hashing.murmur3_128();
private static final int COMMENT_MAX_LENGTH = 4000;
@Transactional
public List<ManualAddResponse> addAddRedaction(String dossierId, String fileId, List<AddRedactionRequest> addRedactionRequests) {
var response = new ArrayList<ManualAddResponse>();
@ -196,46 +200,7 @@ public class ManualRedactionService {
log.info("hard delete ManualRedactions for file {} and annotation {}", fileId, removeRedactionRequest.getAnnotationId());
manualRedactionProviderService.hardDeleteManualRedactions(fileId, removeRedactionRequest.getAnnotationId());
} else {
log.info("add removeRedaction for file {} and annotation {}", fileId, removeRedactionRequest.getAnnotationId());
var idRemoval = convert(removeRedactionPersistenceService.insert(fileId, removeRedactionRequest), IdRemoval.class);
if (redactionLog == null) {
redactionLog = fileManagementStorageService.getRedactionLog(dossier.getId(), fileId);
}
Long commentId = null;
if (removeRedactionRequest.getComment() != null) {
commentId = addComment(fileId, removeRedactionRequest.getAnnotationId(), removeRedactionRequest.getComment(), removeRedactionRequest.getUser()).getId();
}
if (!removeRedactionRequest.isRemoveFromDictionary() && AnnotationStatus.APPROVED.equals(removeRedactionRequest.getStatus())) {
Optional<RedactionLogEntry> redactionLogEntryOptional = redactionLog.getRedactionLogEntry()
.stream()
.filter(entry -> entry.getId().equals(removeRedactionRequest.getAnnotationId()))
.findFirst();
var requiresAnalysis = redactionLogEntryOptional.isPresent() && redactionLogEntryOptional.get().isHint();
actionPerformed = actionPerformed || requiresAnalysis;
if (!requiresAnalysis && idRemoval.isApproved()) {
removeRedactionPersistenceService.markAsProcessed(idRemoval);
}
}
var removedFromDictionary = handleRemoveFromDictionary(redactionLog,
dossier,
fileId,
removeRedactionRequest.getAnnotationId(),
removeRedactionRequest.getStatus(),
removeRedactionRequest.isRemoveFromDictionary(),
false);
if (!removedFromDictionary && idRemoval.isApproved()) {
removeRedactionPersistenceService.markAsProcessed(idRemoval);
}
actionPerformed = actionPerformed || removedFromDictionary;
response.add(ManualAddResponse.builder().annotationId(removeRedactionRequest.getAnnotationId()).commentId(commentId).build());
actionPerformed = removeNonManualRedaction(redactionLog, fileId, removeRedactionRequest, dossier, actionPerformed, response);
}
}
@ -248,7 +213,57 @@ public class ManualRedactionService {
return response;
}
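// Handles removal of a redaction that was not added manually: stores the optional comment,
// records the removal, decides whether the file needs re-analysis and whether the removal
// can be marked as processed right away.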
@Transactional
private boolean removeNonManualRedaction(RedactionLog redactionLog, String fileId, RemoveRedactionRequest removeRedactionRequest, DossierEntity dossier, boolean actionPerformed,
List<ManualAddResponse> response) {
log.info("add removeRedaction for file {} and annotation {}", fileId, removeRedactionRequest.getAnnotationId());
Long commentId = null;
String comment = removeRedactionRequest.getComment();
if (comment != null) {
commentId = addComment(fileId, removeRedactionRequest.getAnnotationId(), comment, removeRedactionRequest.getUser()).getId();
}
var idRemoval = convert(removeRedactionPersistenceService.insert(fileId, removeRedactionRequest), IdRemoval.class);
if (redactionLog == null) {
redactionLog = fileManagementStorageService.getRedactionLog(dossier.getId(), fileId);
}
if (!removeRedactionRequest.isRemoveFromDictionary() && AnnotationStatus.APPROVED.equals(removeRedactionRequest.getStatus())) {
Optional<RedactionLogEntry> redactionLogEntryOptional = redactionLog.getRedactionLogEntry()
.stream()
.filter(entry -> entry.getId().equals(removeRedactionRequest.getAnnotationId()))
.findFirst();
var requiresAnalysis = redactionLogEntryOptional.isPresent() && redactionLogEntryOptional.get().isHint();
actionPerformed = actionPerformed || requiresAnalysis;
if (!requiresAnalysis && idRemoval.isApproved()) {
removeRedactionPersistenceService.markAsProcessed(idRemoval);
}
}
var removedFromDictionary = handleRemoveFromDictionary(redactionLog,
dossier,
fileId,
removeRedactionRequest.getAnnotationId(),
removeRedactionRequest.getStatus(),
removeRedactionRequest.isRemoveFromDictionary(),
false);
if (!removedFromDictionary && idRemoval.isApproved()) {
removeRedactionPersistenceService.markAsProcessed(idRemoval);
}
actionPerformed = actionPerformed || removedFromDictionary;
response.add(ManualAddResponse.builder().annotationId(removeRedactionRequest.getAnnotationId()).commentId(commentId).build());
return actionPerformed;
}
@Transactional
public List<ManualAddResponse> addForceRedaction(String dossierId, String fileId, List<ForceRedactionRequest> forceRedactionRequests) {
var response = new ArrayList<ManualAddResponse>();
@ -277,6 +292,7 @@ public class ManualRedactionService {
}
@Transactional
public List<ManualAddResponse> addLegalBasisChange(String dossierId, String fileId, List<LegalBasisChangeRequest> legalBasisChangeRequests) {
var response = new ArrayList<ManualAddResponse>();
@ -299,6 +315,7 @@ public class ManualRedactionService {
}
@Transactional
public List<ManualAddResponse> addImageRecategorization(String dossierId, String fileId, List<ImageRecategorizationRequest> imageRecategorizationRequests) {
var response = new ArrayList<ManualAddResponse>();
@ -501,6 +518,7 @@ public class ManualRedactionService {
}
@Transactional
public List<ManualAddResponse> addResizeRedaction(String dossierId, String fileId, List<ResizeRedactionRequest> resizeRedactionRequests) {
var response = new ArrayList<ManualAddResponse>();
@ -912,9 +930,9 @@ public class ManualRedactionService {
}
}
private CommentEntity addComment(String fileId, String annotationId, String comment, String user) {
checkComment(comment);
return commentPersistenceService.insert(CommentEntity.builder()
.text(comment)
.fileId(fileId)
@ -924,6 +942,12 @@ public class ManualRedactionService {
.build());
}
private void checkComment(String text) {
if (!StringUtils.isEmpty(text) && text.length() > COMMENT_MAX_LENGTH) {
throw new BadRequestException(String.format("The comment is too long (%s), max length %s", text.length(), COMMENT_MAX_LENGTH));
}
}
private boolean handleAddToDictionary(String fileId,
String annotationId,

View File

@ -128,7 +128,6 @@ public class DownloadPreparationService {
}
@Transactional
public void createDownload(RedactionResultMessage reportResultMessage) {
DownloadStatusEntity downloadStatus = downloadStatusPersistenceService.getStatus(reportResultMessage.getDownloadId());
@ -141,7 +140,7 @@ public class DownloadPreparationService {
addReports(reportResultMessage.getDownloadId(), storedFileInformations, fileSystemBackedArchiver);
storeZipFile(downloadStatus, fileSystemBackedArchiver);
downloadStatusPersistenceService.updateStatus(downloadStatus.getStorageId(), DownloadStatusValue.READY, fileSystemBackedArchiver.getContentLength());
updateStatusToReady(downloadStatus, fileSystemBackedArchiver);
notificationPersistenceService.insertNotification(AddNotificationRequest.builder()
.userId(downloadStatus.getUserId())
@ -161,6 +160,12 @@ public class DownloadPreparationService {
}
private void updateStatusToReady(DownloadStatusEntity downloadStatus, FileSystemBackedArchiver fileSystemBackedArchiver) {
downloadStatusPersistenceService.updateStatus(downloadStatus, DownloadStatusValue.READY, fileSystemBackedArchiver.getContentLength());
}
private void generateAndAddFiles(DownloadStatusEntity downloadStatus, RedactionResultMessage reportResultMessage, FileSystemBackedArchiver fileSystemBackedArchiver) {
int i = 1;

View File

@ -40,6 +40,12 @@ public class RedactionResultMessageReceiver {
redactionResultMessage.getDownloadId()));
}
receive(redactionResultMessage);
}
public void receive(RedactionResultMessage redactionResultMessage) {
log.info("Received redaction results for downloadId:{}", redactionResultMessage.getDownloadId());
downloadPreparationService.createDownload(redactionResultMessage);

View File

@ -208,7 +208,7 @@ public class DossierTemplateExportService {
}
storeZipFile(downloadStatus.getStorageId(), fileSystemBackedArchiver);
downloadStatusPersistenceService.updateStatus(downloadStatus.getStorageId(), DownloadStatusValue.READY, fileSystemBackedArchiver.getContentLength());
downloadStatusPersistenceService.updateStatus(downloadStatus, DownloadStatusValue.READY, fileSystemBackedArchiver.getContentLength());
} catch (JsonProcessingException e) {
log.debug("fail ", e);

View File

@ -6,6 +6,8 @@ import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
@ -21,14 +23,31 @@ import lombok.extern.slf4j.Slf4j;
@Slf4j
public class FileSystemBackedArchiver implements AutoCloseable {
private final boolean rethrowExceptions;
private final Set<String> createdFolders = new HashSet<>();
private final File tempFile;
private final ZipOutputStream zipOutputStream;
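// Size of the finished archive on disk; captured when the zip stream is closed so the
// value is still available after the temp file has been deleted in close().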
private long tempFileLength;
@SneakyThrows
public FileSystemBackedArchiver() {
this(false);
}
/**
* Controls whether exceptions are re-thrown. Mostly meant for testing.
*
* @param rethrowExceptions If true, exceptions caught when handling streams and files will be re-thrown.
*/
@SneakyThrows
FileSystemBackedArchiver(boolean rethrowExceptions) {
this.rethrowExceptions = rethrowExceptions;
tempFile = FileUtils.createTempFile("archive", ".zip");
zipOutputStream = new ZipOutputStream(new FileOutputStream(tempFile));
}
@ -50,11 +69,7 @@ public class FileSystemBackedArchiver implements AutoCloseable {
@SneakyThrows
public InputStream toInputStream() {
try {
zipOutputStream.close();
} catch (IOException e) {
log.debug(e.getMessage());
}
closeStreamAndStoreTempFileLength();
return new BufferedInputStream(new FileInputStream(tempFile));
}
@ -80,26 +95,50 @@ public class FileSystemBackedArchiver implements AutoCloseable {
@Override
public void close() {
closeStreamAndStoreTempFileLength();
try {
boolean res = tempFile.delete();
if (!res) {
log.warn("Failed to delete temp file");
Files.delete(tempFile.toPath());
} catch (IOException e) {
log.warn("Failed to delete temp-file", e);
if (rethrowExceptions) {
throw new RuntimeException(e);
}
zipOutputStream.close();
} catch (Exception e) {
log.debug("Failed to close FileSystemBackedArchiver");
}
}
public long getContentLength() {
closeStreamAndStoreTempFileLength();
return tempFileLength;
}
private void closeStreamAndStoreTempFileLength() {
try {
zipOutputStream.close();
} catch (IOException e) {
log.debug(e.getMessage());
log.warn("Failed to close temp-file stream", e);
if (rethrowExceptions) {
throw new RuntimeException(e);
}
}
if (tempFile.exists()) {
try {
var basicFileAttributes = Files.readAttributes(tempFile.toPath(), BasicFileAttributes.class);
tempFileLength = basicFileAttributes.size();
} catch (IOException e) {
if (rethrowExceptions) {
throw new RuntimeException(e);
}
}
} else {
log.warn("The temp file {} was deleted before it was completely processed", tempFile);
}
return tempFile.length();
}

View File

@ -41,7 +41,6 @@ spring:
max-attempts: 3
max-interval: 15000
prefetch: 1
liquibase:
change-log: classpath:/db/changelog/db.changelog-master.yaml
quartz:
@ -52,12 +51,13 @@ spring:
org:
quartz:
jobStore:
class: org.springframework.scheduling.quartz.LocalDataSourceJobStore
clusterCheckinInterval: 1000
isClustered: true
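# Postgres-specific delegate for the clustered JDBC job store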
driverDelegateClass: org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
scheduler:
instanceId: AUTO
job-store-type: jdbc
job-store-type: JDBC
management:
endpoint:

View File

@ -1,6 +1,6 @@
------------------------------------------------------------------
| |
| File Management Service V1 Server |
| File Management Service V1 Server X |
| |
________________________________________________________________
| |

View File

@ -113,3 +113,5 @@ databaseChangeLog:
file: db/changelog/tenant/sql/43-add-applied-redaction-color.sql
- include:
file: db/changelog/tenant/sql/45-unique-dossier-name.sql
- include:
file: db/changelog/tenant/45-modify-section-length.yaml

View File

@ -0,0 +1,9 @@
databaseChangeLog:
- changeSet:
id: modify-section-length
author: aisvoran
changes:
- modifyDataType:
columnName: section
newDataType: VARCHAR(1024)
tableName: manual_legal_basis_change

View File

@ -74,7 +74,7 @@ public class FileTesterAndProvider {
var fileId = Base64.encodeBase64String((dossier.getId() + fileName).getBytes(StandardCharsets.UTF_8));
AddFileRequest upload = new AddFileRequest(fileName, fileId, dossier.getId(), "1");
AddFileRequest upload = new AddFileRequest(fileName, fileId, dossier.getId(), "1", 1);
fileManagementStorageService.storeObject(dossier.getId(), fileId, FileType.UNTOUCHED, new ByteArrayInputStream("test".getBytes(StandardCharsets.UTF_8)));
JSONPrimitive<String> uploadResult = uploadClient.upload(upload, false);

View File

@ -351,7 +351,7 @@ public class DictionaryTest extends AbstractPersistenceServerServiceTest {
var createdType = dictionaryClient.addType(type);
var word1 = "Luke Skywalker";
var word2 = "Anakin Skywalker";
var word2 = "anakin Skywalker";
var word3 = "Yoda";
// Act & Assert: Add different words; All three should exist
@ -376,6 +376,16 @@ public class DictionaryTest extends AbstractPersistenceServerServiceTest {
var existingEntries = dictionaryClient.getEntriesForType(createdType.getTypeId(), 0L, DictionaryEntryType.ENTRY);
assertThat(existingEntries.stream().filter(f -> !f.isDeleted()).count()).isEqualTo(5);
var dictionary = dictionaryClient.getDictionaryForType(createdType.getTypeId(), 0L);
var dictEntries = dictionary.getEntries();
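// The entries are expected back sorted case-insensitively by value, so the lower-case
// "anakin Skywalker" still sorts before "Luke Skywalker".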
assertThat(dictEntries).hasSize(5);
assertThat(dictEntries.get(0).getValue()).isEqualTo(word2);
assertThat(dictEntries.get(1).getValue()).isEqualTo(word1);
assertThat(dictEntries.get(2).getValue()).isEqualTo(word5);
assertThat(dictEntries.get(3).getValue()).isEqualTo(word4);
assertThat(dictEntries.get(4).getValue()).isEqualTo(word3);
}
}

View File

@ -5,11 +5,15 @@ import static org.assertj.core.api.Assertions.assertThat;
import java.io.ByteArrayInputStream;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.stream.Collectors;
import org.junit.Before;
import org.junit.Test;
import org.springframework.beans.factory.annotation.Autowired;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.iqser.red.service.pdftron.redaction.v1.api.model.RedactionResultMessage;
import com.iqser.red.service.peristence.v1.server.integration.client.DossierClient;
import com.iqser.red.service.peristence.v1.server.integration.client.DownloadClient;
import com.iqser.red.service.peristence.v1.server.integration.client.FileClient;
@ -19,22 +23,35 @@ import com.iqser.red.service.peristence.v1.server.integration.service.DossierTes
import com.iqser.red.service.peristence.v1.server.integration.service.FileTesterAndProvider;
import com.iqser.red.service.peristence.v1.server.integration.utils.AbstractPersistenceServerServiceTest;
import com.iqser.red.service.peristence.v1.server.service.download.DownloadReportMessageReceiver;
import com.iqser.red.service.peristence.v1.server.service.download.RedactionResultMessageReceiver;
import com.iqser.red.service.persistence.management.v1.processor.utils.multitenancy.TenantContext;
import com.iqser.red.service.persistence.service.v1.api.model.dossiertemplate.DossierTemplate;
import com.iqser.red.service.persistence.service.v1.api.model.dossiertemplate.ReportTemplate;
import com.iqser.red.service.persistence.service.v1.api.model.dossiertemplate.ReportTemplateUploadRequest;
import com.iqser.red.service.persistence.service.v1.api.model.dossiertemplate.dossier.CreateOrUpdateDossierRequest;
import com.iqser.red.service.persistence.service.v1.api.model.dossiertemplate.dossier.Dossier;
import com.iqser.red.service.persistence.service.v1.api.model.dossiertemplate.dossier.file.FileModel;
import com.iqser.red.service.persistence.service.v1.api.model.dossiertemplate.dossier.file.WorkflowStatus;
import com.iqser.red.service.persistence.service.v1.api.model.download.DownloadStatus;
import com.iqser.red.service.persistence.service.v1.api.model.download.DownloadStatusValue;
import com.iqser.red.service.persistence.service.v1.api.model.download.DownloadWithOptionRequest;
import com.iqser.red.service.redaction.report.v1.api.model.ReportResultMessage;
import com.iqser.red.service.redaction.report.v1.api.model.StoredFileInformation;
import com.iqser.red.storage.commons.service.StorageService;
import lombok.AccessLevel;
import lombok.SneakyThrows;
import lombok.experimental.FieldDefaults;
public class DownloadPreparationTest extends AbstractPersistenceServerServiceTest {
public static final String USER_ID = "1";
@Autowired
private DownloadReportMessageReceiver downloadReportMessageReceiver;
@Autowired
private RedactionResultMessageReceiver redactionResultMessageReceiver;
@Autowired
private StorageService storageService;
@ -60,76 +77,120 @@ public class DownloadPreparationTest extends AbstractPersistenceServerServiceTes
private FileClient fileClient;
@Before
public void before() {
TenantContext.setTenantId("redaction");
}
@Test
@SneakyThrows
public void testReceiveDownloadPackage() {
var dossierTemplate = dossierTemplateTesterAndProvider.provideTestTemplate();
var testData = new TestData();
var dossier = dossierTesterAndProvider.provideTestDossier(dossierTemplate);
var file = fileTesterAndProvider.testAndProvideFile(dossier);
fileClient.setStatusApproved(dossier.getId(), file.getId(), file.getUploader());
var file11 = fileClient.getFileStatus(dossier.getId(), file.getId());
fileClient.setStatusApproved(testData.dossier.getId(), testData.file.getId(), testData.file.getUploader());
var file11 = fileClient.getFileStatus(testData.dossier.getId(), testData.file.getId());
assertThat(file11.getWorkflowStatus()).isEqualTo(WorkflowStatus.APPROVED);
reportTemplateClient.uploadTemplate(ReportTemplateUploadRequest.builder()
.activeByDefault(true)
.dossierTemplateId(dossierTemplate.getId())
.multiFileReport(true)
.fileName("test.docx")
.template(new byte[]{1, 2, 3, 4})
.build());
var availableTemplates = reportTemplateClient.getAvailableReportTemplates(dossierTemplate.getId());
assertThat(availableTemplates).isNotEmpty();
dossierClient.updateDossier(CreateOrUpdateDossierRequest.builder()
.dossierName(dossier.getDossierName())
.description(dossier.getDescription())
.ownerId(dossier.getOwnerId())
.memberIds(dossier.getMemberIds())
.approverIds(dossier.getApproverIds())
.downloadFileTypes(dossier.getDownloadFileTypes())
.watermarkId(dossier.getWatermarkId())
.dueDate(dossier.getDueDate())
.dossierTemplateId(dossier.getDossierTemplateId())
.reportTemplateIds(availableTemplates.stream().map(ReportTemplate::getTemplateId).collect(Collectors.toList()))
.build(), dossier.getId());
var updatedDossier = dossierClient.getDossierById(dossier.getId(), false, false);
var updatedDossier = dossierClient.getDossierById(testData.dossier.getId(), false, false);
assertThat(updatedDossier.getReportTemplateIds()).isNotEmpty();
downloadClient.prepareDownload(DownloadWithOptionRequest.builder()
.userId("1")
.dossierId(dossier.getId())
.fileIds(Collections.singletonList(file.getId()))
.userId(USER_ID)
.dossierId(testData.dossier.getId())
.fileIds(Collections.singletonList(testData.file.getId()))
.redactionPreviewColor("#aaaaaa")
.build());
var statuses = downloadClient.getDownloadStatus("1");
var statuses = downloadClient.getDownloadStatus(USER_ID);
assertThat(statuses).isNotEmpty();
assertThat(statuses.iterator().next().getLastDownload()).isNull();
DownloadStatus firstStatus = statuses.get(0);
assertThat(firstStatus.getLastDownload()).isNull();
String downloadId = firstStatus.getStorageId();
addStoredFileInformationToStorage(testData.file, testData.availableTemplates, downloadId);
ReportResultMessage reportResultMessage = new ReportResultMessage();
reportResultMessage.setUserId(USER_ID);
reportResultMessage.setDownloadId(downloadId);
downloadReportMessageReceiver.receive(reportResultMessage);
redactionResultMessageReceiver.receive(RedactionResultMessage.builder()
.downloadId(downloadId)
.dossierId(testData.dossier.getId())
.redactionResultDetails(Collections.emptyList())
.build());
List<DownloadStatus> finalDownloadStatuses = downloadClient.getDownloadStatus(USER_ID);
assertThat(finalDownloadStatuses).hasSize(1);
DownloadStatus finalDownloadStatus = finalDownloadStatuses.get(0);
assertThat(finalDownloadStatus.getStatus()).isEqualTo(DownloadStatusValue.READY);
assertThat(finalDownloadStatus.getFileSize()).isGreaterThan(0);
}
@SneakyThrows
private void addStoredFileInformationToStorage(FileModel file, List<ReportTemplate> availableTemplates, String downloadId) {
var storedFileInformationstorageId = downloadId.substring(0, downloadId.length() - 3) + "/REPORT_INFO.json";
String reportStorageId = "XYZ";
// FIXME Check if this is still needed.
// This variable seems to do nothing, if it is not needed it can be removed.
var sivList = new ArrayList<StoredFileInformation>();
var siv = new StoredFileInformation();
siv.setFileId(file.getId());
siv.setStorageId("XYZ");
siv.setStorageId(reportStorageId);
siv.setTemplateId(availableTemplates.iterator().next().getTemplateId());
sivList.add(siv);
// FIXME Check if this is still needed.
storageService.storeObject("XYZ", new ByteArrayInputStream(new byte[]{1, 2, 3, 4}));
storageService.storeObject(storedFileInformationstorageId, new ByteArrayInputStream(new ObjectMapper().writeValueAsBytes(sivList)));
storageService.storeObject(reportStorageId, new ByteArrayInputStream(new byte[]{1, 2, 3, 4}));
}
ReportResultMessage reportResultMessage = new ReportResultMessage();
reportResultMessage.setUserId("1");
reportResultMessage.setDownloadId(statuses.iterator().next().getStorageId());
downloadReportMessageReceiver.receive(reportResultMessage);
@FieldDefaults(makeFinal = true, level = AccessLevel.PRIVATE)
private class TestData {
DossierTemplate dossierTemplate = dossierTemplateTesterAndProvider.provideTestTemplate();
Dossier dossier = dossierTesterAndProvider.provideTestDossier(dossierTemplate);
FileModel file = fileTesterAndProvider.testAndProvideFile(dossier);
List<ReportTemplate> availableTemplates;
private TestData() {
reportTemplateClient.uploadTemplate(ReportTemplateUploadRequest.builder()
.activeByDefault(true)
.dossierTemplateId(dossierTemplate.getId())
.multiFileReport(true)
.fileName("test.docx")
.template(new byte[]{1, 2, 3, 4})
.build());
availableTemplates = reportTemplateClient.getAvailableReportTemplates(dossierTemplate.getId());
assertThat(availableTemplates).isNotEmpty();
dossierClient.updateDossier(CreateOrUpdateDossierRequest.builder()
.dossierName(dossier.getDossierName())
.description(dossier.getDescription())
.ownerId(dossier.getOwnerId())
.memberIds(dossier.getMemberIds())
.approverIds(dossier.getApproverIds())
.downloadFileTypes(dossier.getDownloadFileTypes())
.watermarkId(dossier.getWatermarkId())
.dueDate(dossier.getDueDate())
.dossierTemplateId(dossier.getDossierTemplateId())
.reportTemplateIds(availableTemplates.stream().map(ReportTemplate::getTemplateId).collect(Collectors.toList()))
.build(), dossier.getId());
}
}
}
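Editorial note (not part of the diff): the addStoredFileInformationToStorage helper above serializes the StoredFileInformation list with Jackson and stores it under a key derived from the download id. A minimal sketch of the consuming side follows, assuming a symmetric read method on StorageService; only storeObject is shown in the diff, so readObject here is a hypothetical name.
import java.io.InputStream;
import java.util.List;
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.iqser.red.service.redaction.report.v1.api.model.StoredFileInformation;
import com.iqser.red.storage.commons.service.StorageService;
class ReportInfoReaderSketch {
    private final StorageService storageService;
    private final ObjectMapper mapper = new ObjectMapper();
    ReportInfoReaderSketch(StorageService storageService) {
        this.storageService = storageService;
    }
    List<StoredFileInformation> readReportInfo(String downloadId) throws Exception {
        // Same key derivation as the test helper: drop the last three characters of the
        // download id and append the REPORT_INFO.json suffix.
        String key = downloadId.substring(0, downloadId.length() - 3) + "/REPORT_INFO.json";
        // readObject(...) is an assumed counterpart to the storeObject(...) call used in the test.
        try (InputStream in = storageService.readObject(key)) {
            return mapper.readValue(in, new TypeReference<List<StoredFileInformation>>() {});
        }
    }
}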

View File

@@ -136,7 +136,7 @@ public class FileTest extends AbstractPersistenceServerServiceTest {
assertThat(viewedPages.size()).isEqualTo(1);
AddFileRequest upload = new AddFileRequest(filename, file.getId(), dossier.getId(), "1");
AddFileRequest upload = new AddFileRequest(filename, file.getId(), dossier.getId(), "1", 1);
JSONPrimitive<String> uploadResult = uploadClient.upload(upload, false);
loadedFile = fileClient.getFileStatus(dossier.getId(), uploadResult.getValue());
@@ -174,7 +174,7 @@ public class FileTest extends AbstractPersistenceServerServiceTest {
assertThat(viewedPages).hasSize(1);
AddFileRequest upload = new AddFileRequest(filename, file.getId(), dossier.getId(), "1");
AddFileRequest upload = new AddFileRequest(filename, file.getId(), dossier.getId(), "1", 1);
JSONPrimitive<String> uploadResult = uploadClient.upload(upload, true);
loadedFile = fileClient.getFileStatus(dossier.getId(), uploadResult.getValue());

View File

@@ -2,11 +2,10 @@ package com.iqser.red.service.peristence.v1.server.integration.tests;
import static org.assertj.core.api.Assertions.assertThat;
import java.time.Instant;
import java.time.OffsetDateTime;
import java.time.temporal.ChronoUnit;
import java.util.stream.Collectors;
import java.time.ZoneId;
import org.assertj.core.util.Lists;
import org.junit.Test;
import org.springframework.beans.factory.annotation.Autowired;
@@ -15,7 +14,10 @@ import com.iqser.red.service.peristence.v1.server.integration.service.DossierTem
import com.iqser.red.service.peristence.v1.server.integration.service.DossierTesterAndProvider;
import com.iqser.red.service.peristence.v1.server.integration.service.FileTesterAndProvider;
import com.iqser.red.service.peristence.v1.server.integration.utils.AbstractPersistenceServerServiceTest;
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.DossierRepository;
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.FileRepository;
import com.iqser.red.service.persistence.service.v1.api.model.dossiertemplate.dossier.Dossier;
import com.iqser.red.service.persistence.service.v1.api.model.dossiertemplate.dossier.file.FileModel;
import com.iqser.red.service.persistence.service.v1.api.model.license.LicenseReport;
import com.iqser.red.service.persistence.service.v1.api.model.license.LicenseReportRequest;
@@ -35,44 +37,104 @@ public class LicenseReportTest extends AbstractPersistenceServerServiceTest {
@Autowired
private DossierTesterAndProvider dossierTesterAndProvider;
@Autowired
private FileRepository fileRepository;
@Autowired
private DossierRepository dossierRepository;
@Test
public void testLicenseReport() {
var template = dossierTemplateTesterAndProvider.provideTestTemplate();
var dossier1 = dossierTesterAndProvider.provideTestDossier(template, "Dossier1");
var lastYearDossier = dossierTesterAndProvider.provideTestDossier(template, "lastYearDossier");
addFileOnDate(lastYearDossier, "2022-01-01T10:00:00Z");
addFileOnDate(lastYearDossier, "2022-01-02T10:00:00Z");
var file1 = fileTesterAndProvider.testAndProvideFile(dossier1, "test1.pdf");
var file2 = fileTesterAndProvider.testAndProvideFile(dossier1, "test2.pdf");
var januaryDossier = dossierTesterAndProvider.provideTestDossier(template, "januaryDossier");
var januaryFile1 = addFileOnDate(januaryDossier, "2023-01-01T10:00:00Z");
addFileOnDate(januaryDossier, "2023-01-02T10:00:00Z");
var dossier2 = dossierTesterAndProvider.provideTestDossier(template, "Dossier2");
var februaryDossier = dossierTesterAndProvider.provideTestDossier(template, "februaryDossier");
addFileOnDate(februaryDossier, "2023-02-01T10:00:00Z");
addFileOnDate(februaryDossier, "2023-02-02T10:00:00Z");
var file3 = fileTesterAndProvider.testAndProvideFile(dossier2, "test3.pdf");
var file4 = fileTesterAndProvider.testAndProvideFile(dossier2, "test4.pdf");
var marchDossier = dossierTesterAndProvider.provideTestDossier(template, "marchDossier");
addFileOnDate(marchDossier, "2023-03-01T10:00:00Z");
addFileOnDate(marchDossier, "2023-03-02T10:00:00Z");
var dossiers = Lists.newArrayList(dossier1, dossier2);
var files = Lists.newArrayList(file1, file2, file3, file4);
var aprilDossier = dossierTesterAndProvider.provideTestDossier(template, "aprilDossier");
addFileOnDate(aprilDossier, "2023-04-01T10:00:00Z");
addFileOnDate(aprilDossier, "2023-04-02T10:00:00Z");
LicenseReportRequest request = new LicenseReportRequest();
request.setDossierIds(dossiers.stream().map(Dossier::getId).collect(Collectors.toList()));
softDeleteFile(januaryFile1.getId(), "2023-02-01T10:00:00Z");
var startDate = OffsetDateTime.now().minusHours(10).toInstant().truncatedTo(ChronoUnit.MILLIS);
request.setStartDate(startDate);
hardDeleteFile(januaryFile1.getId(), "2023-03-01T10:00:00Z");
var endDate = OffsetDateTime.now().plusHours(10).toInstant().truncatedTo(ChronoUnit.MILLIS);
request.setEndDate(endDate);
archiveDossier(februaryDossier.getId(), "2023-05-01T10:00:00Z");
String requestId = "123";
request.setRequestId(requestId);
LicenseReport licenseReport = licenseReportClient.getLicenseReport(LicenseReportRequest.builder()
.startDate(Instant.parse("2023-01-01T10:00:00Z"))
.endDate(Instant.parse("2023-05-01T11:00:00Z"))
.build());
LicenseReport licenseReport = licenseReportClient.getLicenseReport(request, 0, 20);
assertThat(licenseReport.getTotalFilesUploadedBytes()).isEqualTo(900L);
assertThat(licenseReport.getActiveFilesUploadedBytes()).isEqualTo(700L);
assertThat(licenseReport.getArchivedFilesUploadedBytes()).isEqualTo(200L);
assertThat(licenseReport.getTrashFilesUploadedBytes()).isEqualTo(0L);
assertThat(licenseReport.getNumberOfDossiers()).isEqualTo(dossiers.size());
assertThat(licenseReport.getNumberOfAnalyzedFiles()).isEqualTo(files.size());
assertThat(licenseReport.getRequestId()).isEqualTo(requestId);
assertThat(licenseReport.getStartDate()).isEqualTo(startDate);
assertThat(licenseReport.getEndDate()).isEqualTo(endDate);
assertThat(licenseReport.getMonthlyData().size()).isEqualTo(5);
assertThat(licenseReport.getMonthlyData().get(1).getTrashFilesUploadedBytes()).isEqualTo(100L);
assertThat(licenseReport.getMonthlyData().get(2).getTrashFilesUploadedBytes()).isEqualTo(0L);
}
private FileModel addFileOnDate(Dossier dossier, String date) {
var file = fileTesterAndProvider.testAndProvideFile(dossier, date + ".pdf");
setAdded(file.getId(), date);
return file;
}
private void setAdded(String fileId, String date) {
fileRepository.findById(fileId).ifPresent((file) -> {
file.setAdded(OffsetDateTime.ofInstant(Instant.parse(date), ZoneId.of("Z")));
file.setFileSize(100L);
fileRepository.saveAndFlush(file);
});
}
private void archiveDossier(String dossierId, String date) {
dossierRepository.findById(dossierId).ifPresent((dossier) -> {
dossier.setArchivedTime(OffsetDateTime.ofInstant(Instant.parse(date), ZoneId.of("Z")));
dossierRepository.saveAndFlush(dossier);
});
}
private void softDeleteFile(String fileId, String date) {
fileRepository.findById(fileId).ifPresent((file) -> {
file.setDeleted(OffsetDateTime.ofInstant(Instant.parse(date), ZoneId.of("Z")));
fileRepository.saveAndFlush(file);
});
}
private void hardDeleteFile(String fileId, String date) {
fileRepository.findById(fileId).ifPresent((file) -> {
file.setHardDeletedTime(OffsetDateTime.ofInstant(Instant.parse(date), ZoneId.of("Z")));
fileRepository.saveAndFlush(file);
});
}
}
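Editorial note (not part of the diff): the expected byte counts in the assertions above appear to follow from the test setup, where setAdded gives every date-based file a size of 100 bytes. A hedged breakdown of that arithmetic, under the assumption that hard-deleted files are excluded from the totals while the totals themselves are not restricted to the report window:
// Worked arithmetic sketch for the LicenseReportTest expectations (editorial, hedged).
class LicenseReportArithmeticSketch {
    public static void main(String[] args) {
        long fileSize = 100L;                    // setAdded(...) sets fileSize = 100L
        long uploaded = 10 * fileSize;           // 2 files in each of the 5 date-based dossiers
        long hardDeleted = 1 * fileSize;         // januaryFile1, hard-deleted in March
        long total = uploaded - hardDeleted;     // 900L -> getTotalFilesUploadedBytes()
        long archived = 2 * fileSize;            // februaryDossier archived in May -> 200L
        long trash = 0L;                         // the soft-deleted file was later hard-deleted
        long active = total - archived - trash;  // 700L -> getActiveFilesUploadedBytes()
        // Monthly view: February (index 1) shows 100L in trash because januaryFile1 was
        // soft-deleted on 2023-02-01 and only hard-deleted in March (index 2 shows 0L).
        System.out.printf("total=%d active=%d archived=%d trash=%d%n", total, active, archived, trash);
    }
}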

View File

@@ -0,0 +1,52 @@
package com.iqser.red.service.peristence.v1.server.integration.tests;
import static org.assertj.core.api.Assertions.assertThat;
import org.junit.Before;
import org.junit.Test;
import org.springframework.beans.factory.annotation.Autowired;
import com.iqser.red.service.peristence.v1.server.integration.utils.AbstractPersistenceServerServiceTest;
import com.iqser.red.service.peristence.v1.server.integration.utils.MultithreadedTestRunner;
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.NotificationPreferencesPersistenceService;
import com.iqser.red.service.persistence.management.v1.processor.utils.multitenancy.TenantContext;
import lombok.AccessLevel;
import lombok.SneakyThrows;
import lombok.experimental.FieldDefaults;
import lombok.extern.slf4j.Slf4j;
@Slf4j
@FieldDefaults(level = AccessLevel.PRIVATE)
public class NotificationPreferencesServiceTest extends AbstractPersistenceServerServiceTest {
@Autowired
NotificationPreferencesPersistenceService notificationPreferencesPersistenceService;
final MultithreadedTestRunner multithreadedTestRunner = new MultithreadedTestRunner(2, 1000);
@Before
public void setup() {
TenantContext.setTenantId("redaction");
}
@Test
@SneakyThrows
public void testNotificationPreferencesConcurrent() {
final String userId = "1";
Runnable test = () -> notificationPreferencesPersistenceService.getOrCreateNotificationPreferences(userId);
Runnable afterTest = () -> notificationPreferencesPersistenceService.deleteNotificationPreferences(userId);
var exceptions = multithreadedTestRunner.runMutlithreadedCollectingExceptions(test, afterTest);
for (Exception ex : exceptions) {
log.error("Exception during notification creation", ex);
}
assertThat(exceptions).isEmpty();
}
}
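Editorial note (not part of the diff): the test above drives getOrCreateNotificationPreferences from two threads and expects no exceptions. The sketch below shows a generic race-tolerant get-or-create pattern of the kind such a test exercises; the repository interface and the fallback-on-constraint-violation strategy are assumptions, not a description of how NotificationPreferencesPersistenceService is actually implemented.
import java.util.Optional;
import java.util.function.Function;
import org.springframework.dao.DataIntegrityViolationException;
class GetOrCreateSketch<T> {
    // Assumed minimal repository contract, for the sketch only.
    interface PreferencesRepository<T> {
        Optional<T> findByUserId(String userId);
        T save(T entity);
    }
    private final PreferencesRepository<T> repository;
    private final Function<String, T> defaultsFactory;
    GetOrCreateSketch(PreferencesRepository<T> repository, Function<String, T> defaultsFactory) {
        this.repository = repository;
        this.defaultsFactory = defaultsFactory;
    }
    T getOrCreate(String userId) {
        return repository.findByUserId(userId).orElseGet(() -> {
            try {
                return repository.save(defaultsFactory.apply(userId));
            } catch (DataIntegrityViolationException e) {
                // A concurrent thread won the insert; re-read instead of propagating the failure.
                return repository.findByUserId(userId).orElseThrow();
            }
        });
    }
}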

View File

@@ -12,11 +12,17 @@ import org.springframework.beans.factory.annotation.Autowired;
import com.iqser.red.service.peristence.v1.server.integration.client.NotificationClient;
import com.iqser.red.service.peristence.v1.server.integration.client.NotificationPreferencesClient;
import com.iqser.red.service.peristence.v1.server.integration.utils.AbstractPersistenceServerServiceTest;
import com.iqser.red.service.peristence.v1.server.integration.utils.MultithreadedTestRunner;
import com.iqser.red.service.persistence.service.v1.api.model.audit.AddNotificationRequest;
import com.iqser.red.service.persistence.service.v1.api.model.common.JSONPrimitive;
import com.iqser.red.service.persistence.service.v1.api.model.notification.Notification;
import com.iqser.red.service.persistence.service.v1.api.model.notification.NotificationType;
import lombok.SneakyThrows;
import lombok.extern.slf4j.Slf4j;
@SuppressWarnings("SpringJavaInjectionPointsAutowiringInspection")
@Slf4j
public class NotificationTest extends AbstractPersistenceServerServiceTest {
@Autowired
@@ -25,6 +31,8 @@ public class NotificationTest extends AbstractPersistenceServerServiceTest {
@Autowired
private NotificationPreferencesClient notificationPreferencesClient;
private final MultithreadedTestRunner multithreadedTestRunner = new MultithreadedTestRunner(2, 1000);
@Test
public void testNotificationPreferences() {
@@ -106,4 +114,38 @@ public class NotificationTest extends AbstractPersistenceServerServiceTest {
return currentNotifications.iterator().next();
}
@Test
@SneakyThrows
public void testNotificationPreferencesConcurrent() {
final String userId = "1";
Runnable test = () -> notificationPreferencesClient.getNotificationPreferences(userId);
Runnable afterTest = () -> notificationPreferencesClient.deleteNotificationPreferences(userId);
var exceptions = multithreadedTestRunner.runMutlithreadedCollectingExceptions(test, afterTest);
for (Exception ex : exceptions) {
log.error("Exception during notification creation", ex);
}
assertThat(exceptions).isEmpty();
}
@Test
@SneakyThrows
public void testNotificationsConcurrent() {
final String userId = "1";
Runnable test = () -> notificationClient.getNotifications(userId, false);
Runnable afterTest = () -> notificationPreferencesClient.deleteNotificationPreferences(userId);
var exceptions = multithreadedTestRunner.runMutlithreadedCollectingExceptions(test, afterTest);
for (Exception ex : exceptions) {
log.error("Exception during notification creation", ex);
}
assertThat(exceptions).isEmpty();
}
}

View File

@@ -15,7 +15,7 @@ import com.iqser.red.service.peristence.v1.server.integration.service.DossierTem
import com.iqser.red.service.peristence.v1.server.integration.service.TypeProvider;
import com.iqser.red.service.peristence.v1.server.integration.utils.AbstractPersistenceServerServiceTest;
import com.iqser.red.service.persistence.management.v1.processor.entity.configuration.DictionaryEntryEntity;
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.EntryRepository;
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.dictionaryentry.EntryRepository;
import com.iqser.red.service.persistence.management.v1.processor.utils.jdbc.JDBCWriteUtils;
import com.iqser.red.service.persistence.service.v1.api.model.dossiertemplate.CloneDossierTemplateRequest;
import com.iqser.red.service.persistence.service.v1.api.model.dossiertemplate.type.DictionaryEntryType;

View File

@@ -54,9 +54,6 @@ import com.iqser.red.service.persistence.management.v1.processor.service.persist
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.DossierStatusRepository;
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.DossierTemplateRepository;
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.DownloadStatusRepository;
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.EntryRepository;
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.FalsePositiveEntryRepository;
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.FalseRecommendationEntryRepository;
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.FileAttributeConfigRepository;
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.FileAttributesGeneralConfigurationRepository;
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.FileAttributesRepository;
@@ -76,6 +73,9 @@ import com.iqser.red.service.persistence.management.v1.processor.service.persist
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.TypeRepository;
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.ViewedPagesRepository;
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.WatermarkRepository;
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.dictionaryentry.EntryRepository;
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.dictionaryentry.FalsePositiveEntryRepository;
import com.iqser.red.service.persistence.management.v1.processor.service.persistence.repository.dictionaryentry.FalseRecommendationEntryRepository;
import com.iqser.red.service.persistence.management.v1.processor.utils.multitenancy.TenantContext;
import com.iqser.red.service.persistence.service.v1.api.model.dossiertemplate.configuration.ApplicationConfig;
import com.iqser.red.service.persistence.service.v1.api.model.multitenancy.TenantRequest;
@@ -182,7 +182,6 @@ public abstract class AbstractPersistenceServerServiceTest {
protected PrometheusMeterRegistry prometheusMeterRegistry;
@Before
public void setupOptimize() {

View File

@@ -0,0 +1,71 @@
package com.iqser.red.service.peristence.v1.server.integration.utils;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import lombok.AccessLevel;
import lombok.RequiredArgsConstructor;
import lombok.experimental.FieldDefaults;
@RequiredArgsConstructor
@FieldDefaults(makeFinal = true, level = AccessLevel.PRIVATE)
public class MultithreadedTestRunner {
int numberOfThreads;
int numberOfExecutions;
public List<Exception> runMutlithreadedCollectingExceptions(boolean stopOnFirstRunWithExceptions, Runnable test, Runnable afterTest) {
List<Exception> allExceptions = new ArrayList<>();
for (int execution = 1; execution <= numberOfExecutions; execution++) {
var threads = new ArrayList<Thread>(numberOfThreads);
var exceptions = Collections.synchronizedList(new ArrayList<Exception>());
for (int threadNumber = 1; threadNumber <= numberOfThreads; threadNumber++) {
Thread t = new Thread(() -> {
try {
test.run();
} catch (Exception e) {
exceptions.add(e);
}
});
threads.add(t);
}
for (Thread t : threads) {
t.start();
}
for (Thread t : threads) {
try {
t.join();
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
}
afterTest.run();
if (stopOnFirstRunWithExceptions) {
if (!exceptions.isEmpty()) {
return exceptions;
}
} else {
allExceptions.addAll(exceptions);
}
}
return allExceptions;
}
public List<Exception> runMutlithreadedCollectingExceptions(Runnable test, Runnable afterTest) {
return runMutlithreadedCollectingExceptions(true, test, afterTest);
}
}
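Editorial note (not part of the diff): the new runner is exercised through the two-argument overload in the notification tests above. A minimal standalone usage sketch with placeholder Runnables (same package as the runner assumed, so no extra import is shown; the misspelled method name is kept as-is because it is the actual identifier):
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
class MultithreadedTestRunnerUsageSketch {
    public static void main(String[] args) {
        var runner = new MultithreadedTestRunner(2, 1000);   // 2 threads, 1000 executions, as in the tests
        var counter = new AtomicInteger();
        Runnable test = counter::incrementAndGet;             // the action exercised concurrently
        Runnable afterTest = () -> counter.set(0);            // cleanup after each execution
        // The two-argument overload stops after the first execution that collected exceptions.
        List<Exception> exceptions = runner.runMutlithreadedCollectingExceptions(test, afterTest);
        if (!exceptions.isEmpty()) {
            throw new IllegalStateException("Concurrent run failed", exceptions.get(0));
        }
    }
}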

View File

@@ -1,57 +0,0 @@
package com.iqser.red.service.peristence.v1.server.utils;
import static org.assertj.core.api.Assertions.assertThat;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.ObjectOutputStream;
import java.util.SplittableRandom;
import org.apache.commons.io.IOUtils;
import org.junit.Test;
import lombok.SneakyThrows;
import lombok.extern.slf4j.Slf4j;
@Slf4j
public class FileSystemBackArchiverTest {
@Test
@SneakyThrows
public void testFileSystemBackedArchiver() {
try (var fsba = new FileSystemBackedArchiver()) {
SplittableRandom sr = new SplittableRandom();
var data = sr.doubles().limit(1024 * 1024).toArray();
for (int i = 0; i < 10; i++) {
log.info("At entry: {}, using {}MB of memory", i, (Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory()) / (1024 * 1024));
ByteArrayOutputStream bos = new ByteArrayOutputStream();
ObjectOutputStream oos = new ObjectOutputStream(bos);
oos.writeObject(data);
byte[] bytes = bos.toByteArray();
bos.close();
var entry = new FileSystemBackedArchiver.ArchiveModel("folder-" + i, "file-" + i, bytes);
fsba.addEntry(entry);
}
File f = File.createTempFile("test", ".zip");
var contentSize = fsba.getContentLength();
try (FileOutputStream fos = new FileOutputStream(f)) {
IOUtils.copy(fsba.toInputStream(), fos);
log.info("File: {}", f.getAbsolutePath());
assertThat(f.length()).isEqualTo(contentSize);
}
log.info("Total File Size: {}MB", f.length() / (1024 * 1024));
}
}
}

View File

@@ -0,0 +1,108 @@
package com.iqser.red.service.peristence.v1.server.utils;
import static org.assertj.core.api.Assertions.assertThat;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.InputStream;
import java.io.ObjectOutputStream;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;
import java.util.SplittableRandom;
import org.junit.jupiter.api.Test;
import lombok.SneakyThrows;
import lombok.extern.slf4j.Slf4j;
@Slf4j
public class FileSystemBackedArchiverTest {
private final static byte[] dummyFileContent = new byte[]{1, 2};
@Test
@SneakyThrows
public void testFileSystemBackedArchiver() {
try (var fileSystemBackedArchiver = new FileSystemBackedArchiver(true)) {
SplittableRandom sr = new SplittableRandom();
var data = sr.doubles().limit(1024 * 1024).toArray();
for (int i = 0; i < 10; i++) {
log.info("At entry: {}, using {}MB of memory", i, (Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory()) / (1024 * 1024));
try (ByteArrayOutputStream bos = new ByteArrayOutputStream(); ObjectOutputStream oos = new ObjectOutputStream(bos)) {
oos.writeObject(data);
byte[] bytes = bos.toByteArray();
var entry = new FileSystemBackedArchiver.ArchiveModel("folder-" + i, "file-" + i, bytes);
fileSystemBackedArchiver.addEntry(entry);
}
}
File tempFile = File.createTempFile("test", ".zip");
var contentSize = fileSystemBackedArchiver.getContentLength();
try (InputStream inputStream = fileSystemBackedArchiver.toInputStream()) {
Files.copy(inputStream, tempFile.toPath(), StandardCopyOption.REPLACE_EXISTING);
log.info("File: {}", tempFile.getAbsolutePath());
}
assertThat(tempFile.length()).isEqualTo(contentSize);
log.info("Total File Size: {}MB", tempFile.length() / (1024 * 1024));
Files.delete(tempFile.toPath());
}
}
@Test
public void testContentLengthForTwoEntries() {
try (var fileSystemBackedArchiver = new FileSystemBackedArchiver(true)) {
fileSystemBackedArchiver.addEntry(new FileSystemBackedArchiver.ArchiveModel("Original", "original", dummyFileContent));
fileSystemBackedArchiver.addEntry(new FileSystemBackedArchiver.ArchiveModel("Preview", "preview", dummyFileContent));
assertThat(fileSystemBackedArchiver.getContentLength()).isGreaterThan(0);
}
}
@Test
@SneakyThrows
public void testContentLengthForTwoEntriesAndStream() {
try (var fileSystemBackedArchiver = new FileSystemBackedArchiver(true)) {
fileSystemBackedArchiver.addEntry(new FileSystemBackedArchiver.ArchiveModel("Original", "original", dummyFileContent));
fileSystemBackedArchiver.addEntry(new FileSystemBackedArchiver.ArchiveModel("Preview", "preview", dummyFileContent));
try (InputStream stream = fileSystemBackedArchiver.toInputStream()) {
// Dummy statement to just have the code do something with the stream
//noinspection ResultOfMethodCallIgnored
stream.getClass();
}
assertThat(fileSystemBackedArchiver.getContentLength()).isGreaterThan(0);
}
}
@Test
public void testContentLengthForTwoEntriesWithClosing() {
// deliberately do not use try-with-resources to see if the content-length is available after temp file deletion
var fileSystemBackedArchiver = new FileSystemBackedArchiver(true);
fileSystemBackedArchiver.addEntry(new FileSystemBackedArchiver.ArchiveModel("Original", "original", dummyFileContent));
fileSystemBackedArchiver.addEntry(new FileSystemBackedArchiver.ArchiveModel("Preview", "preview", dummyFileContent));
fileSystemBackedArchiver.close();
assertThat(fileSystemBackedArchiver.getContentLength()).isGreaterThan(0);
}
}

View File

@@ -13,7 +13,6 @@
<packaging>pom</packaging>
<modules>
<module>bamboo-specs</module>
<module>persistence-service-v1</module>
<module>persistence-service-image-v1</module>
</modules>