DELETE on large resource counts

FHIR version: STU3
Server implementation: HAPI FHIR v6.4.0

We are having issues deleting large numbers of resources. The operation is very slow and frequently crashes the server.

Currently we run a multi-tenant server setup in which we are trying to delete up to 1.5 million resources at a time. Our current implementation issues a DELETE with ?_cascade=delete on the Organization resource, which crashes the server.
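
For reference, here is a minimal sketch of the kind of request we are sending, written against the HAPI FHIR generic client (the base URL and Organization id are placeholders, and I believe the client's cascade() option maps to the ?_cascade=delete parameter, but double-check that against your client version):

```java
import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.rest.api.DeleteCascadeModeEnum;
import ca.uhn.fhir.rest.client.api.IGenericClient;
import org.hl7.fhir.dstu3.model.IdType;

public class CascadeDeleteSketch {
    public static void main(String[] args) {
        // Placeholder base URL and Organization id.
        FhirContext ctx = FhirContext.forDstu3();
        IGenericClient client = ctx.newRestfulGenericClient("http://localhost:8080/fhir");

        // Sends the equivalent of DELETE [base]/Organization/example-org?_cascade=delete
        client.delete()
              .resourceById(new IdType("Organization", "example-org"))
              .cascade(DeleteCascadeModeEnum.DELETE)
              .execute();
    }
}
```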

We also tried ?_expunge=true. That request responded with a 200 OK and a scheduled job, but the server then crashed while executing the scheduled job as well.
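
The expunge attempt is essentially a conditional delete with _expunge=true appended to the search URL. A rough sketch of it through the generic client (the search criteria and base URL are placeholders, and I am assuming the client passes the _expunge parameter through unchanged):

```java
import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.rest.client.api.IGenericClient;

public class DeleteExpungeSketch {
    public static void main(String[] args) {
        FhirContext ctx = FhirContext.forDstu3();
        IGenericClient client = ctx.newRestfulGenericClient("http://localhost:8080/fhir");

        // Roughly equivalent to:
        //   DELETE [base]/Patient?organization=Organization/example-org&_expunge=true
        // The server answers 200 OK and schedules a background delete-expunge job.
        client.delete()
              .resourceConditionalByUrl(
                  "Patient?organization=Organization/example-org&_expunge=true")
              .execute();
    }
}
```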

Does anyone have suggestions, or links/information, about dealing with these issues?

Kind Regards, Gijs

I'm not sure whether you only enabled the delete settings recently in the application. Deletes rely on up-to-date indexing within the application.
Enable the delete, cascading delete, and expunge settings, and assign the proper roles to the user (a sketch of the relevant storage settings follows below).
Run a reindex, then try deleting your resources again.
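
The settings I mean are the JPA storage settings. A minimal sketch, assuming you configure the server in Java through DaoConfig (in 6.4 this class is being superseded by JpaStorageSettings, so use whichever your setup exposes; role assignment depends on your own security layer and is not shown here):

```java
import ca.uhn.fhir.jpa.api.config.DaoConfig;

public class StorageSettingsSketch {
    // Assumption: wire this into wherever your server builds its DaoConfig bean.
    public static DaoConfig storageSettings() {
        DaoConfig config = new DaoConfig();
        config.setAllowMultipleDelete(true);   // conditional deletes may match more than one resource
        config.setAllowCascadingDeletes(true); // enables ?_cascade=delete
        config.setExpungeEnabled(true);        // enables the $expunge operation
        config.setDeleteExpungeEnabled(true);  // enables DELETE ...?_expunge=true
        return config;
    }
}
```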

I checked the settings, and we indeed had one missing that got lost when upgrading to HAPI FHIR v6.4.0. We now have all the delete, cascading delete, and expunge settings set to true, and the correct roles assigned as well.

I will run the reindexing command before the delete. I have heard that this can take a long time on large datasets, but it seems worth a try.
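
For completeness, this is roughly how I plan to trigger the reindex through the generic client. I am assuming the newer server-level $reindex operation with an optional url parameter to scope the work; older setups used $mark-all-resources-for-reindexing instead, so treat this as a sketch rather than the exact call:

```java
import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.rest.client.api.IGenericClient;
import org.hl7.fhir.dstu3.model.Parameters;
import org.hl7.fhir.dstu3.model.StringType;

public class ReindexSketch {
    public static void main(String[] args) {
        FhirContext ctx = FhirContext.forDstu3();
        IGenericClient client = ctx.newRestfulGenericClient("http://localhost:8080/fhir");

        // Assumption: scoping the reindex with a "url" parameter; omitting it
        // should reindex everything, which can run for a long time on large data.
        Parameters inParams = new Parameters();
        inParams.addParameter().setName("url").setValue(new StringType("Observation?"));

        Parameters outcome = client.operation()
              .onServer()
              .named("$reindex")
              .withParameters(inParams)
              .execute();

        System.out.println(ctx.newJsonParser().setPrettyPrint(true)
              .encodeResourceToString(outcome));
    }
}
```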

Thanks for the reply!