
Correlations time out

We are seeing errors with correlations on larger requests, such as

It looks like correlations don't finish and, worse, the running processes each take a significant chunk of RAM, around 10 GB. Eventually they disappear.


  • assigned: pjotrp, zachs, alexm
  • keywords: correlations, time out
  • type: bug
  • status: closed, completed
  • priority: critical


  • [X] Set up OOM killer
  • [X] Prevent many GN2 threads taking too much RAM
  • [ ] Disable URL messages


OOM Killer

To prevent future OOM killing, I set the memory allocation settings in sysctl.conf to


The running kernel did not accept these, so it requires a reboot (later).
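The exact values are not recorded above. As a rough sketch, overcommit behaviour is usually constrained with settings along these lines — the values here are illustrative assumptions, not the ones deployed on this host:

```conf
# /etc/sysctl.conf -- illustrative overcommit settings (assumed values)
vm.overcommit_memory = 2    # strict accounting: refuse allocations beyond the commit limit
vm.overcommit_ratio = 90    # commit limit = swap + 90% of physical RAM
```

With strict accounting, an allocation that would blow past the commit limit fails up front instead of triggering the OOM killer later. Such settings are normally applied with `sysctl -p` or at boot.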

GN2 threads

One error I am seeing in the logs is

/export/local/home/zas1024/opt/genenetwork2_20210805/lib/python3.8/site-packages/scipy/stats/stats.py:3913: PearsonRConstantInputWarning: An input array is constant; the correlation coefficient is not defined.
  warnings.warn(PearsonRConstantInputWarning())

This warning surfaces in GN2 because the correlation code runs inside GN2 as a library, which explains why it is the GN2 processes, not the GN3 server, that are growing.

In all, the instability is probably caused by a computation going out of whack. What is worrisome is the amount of RAM all processes hold on to; Python is not cleaning up. We should start by tuning gunicorn's worker-recycling options, such as --max-requests INT and --max-requests-jitter INT, in
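The file referenced above is not shown. As a sketch, these knobs can live in a gunicorn configuration file — the filename and numbers below are illustrative assumptions, not the production values:

```python
# gunicorn.conf.py (hypothetical): recycle workers periodically so a process
# that leaks memory is replaced before it grows to 10 GB.
max_requests = 1000       # restart a worker after it has served this many requests
max_requests_jitter = 50  # randomize the restart point so workers don't all recycle at once
workers = 4               # number of worker processes (illustrative)
timeout = 120             # kill and replace a worker stuck longer than this many seconds
```

gunicorn picks such a file up with its -c flag. With recycling in place, a leaking worker's footprint is bounded by what it can accumulate over roughly max_requests requests.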


GN3 Threads

Currently the GN3 API is run with

env FLASK_DEBUG=1 FLASK_APP="main.py" flask run --port=8086

I added gunicorn for production.
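The invocation is not recorded here. A hypothetical production counterpart of the dev command above — assuming the WSGI application object is `app` in `main.py`, which is not confirmed by this page — would look something like:

```
gunicorn --bind 127.0.0.1:8086 --workers 4 \
         --max-requests 1000 --max-requests-jitter 50 \
         main:app
```

This replaces the single-threaded Flask development server, which is not meant to face production traffic.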

Log noise

In the logs we also see quite a bit of noise. We should disable that:

werkzeug.exceptions.NotFound: 404 Not Found: The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.
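One way to cut this noise — a sketch, assuming the 404s are emitted through werkzeug's standard logger, which is not verified here — is to raise that logger's threshold so routine NotFound chatter is dropped:

```python
import logging

# Raise the werkzeug logger's threshold: per-request INFO lines and routine
# 404 messages are suppressed, while real errors still get through.
logging.getLogger("werkzeug").setLevel(logging.ERROR)
```

This silences werkzeug's request logging wholesale; a finer-grained option would be a logging filter that drops only the NotFound messages.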


Tested this issue, and there was no timeout. Closing this as completed.
