Data Types

Our workflow has a total variable with a value of 1481.47.

Calling the form-data REST API (e.g. /service/form/form-data?taskId=93) throws the following exception:

  "exception": "java.lang.Double cannot be cast to java.lang.String",
  "message": "Internal server error"

I am not casting the value to a string, so I think Activiti is storing the data internally as a Double.


I have no idea why Activiti cannot cast a Double to a String.


I am using the double data type in the workflow, and it is accepted without any problem. Comparisons work as long as both operands are doubles, e.g.:

<activiti:formProperty id="total" name="Invoice Value" type="double" required="true"></activiti:formProperty>
<activiti:formProperty id="seniorLevelValue" type="double" required="true"></activiti:formProperty>

<conditionExpression xsi:type="tFormalExpression"><![CDATA[${total <= seniorLevelValue}]]></conditionExpression>

Issue #2

I am now getting:

"java.lang.Integer cannot be cast to java.lang.String"

Is this because my invoice number is a whole number, but is mapped to a string?


If your exclusive gateway is not behaving, check the following:

  1. Check you are using a gateway.

This example task does not use a gateway, so (I think) the engine does not know which route to take next:


Here is the same example task which has the required exclusive gateway:

  1. Check the gateway has a Default flow

In the XML file, the default gateway will look like this:



If the default flow is empty, then I don’t think the gateway works properly.


I was trying to set a group in the workflow:


And kept getting the Expression did not resolve to a string or collection of strings exception:

ERROR org.activiti.engine.impl.interceptor.CommandContext
- Error while closing command context
Expression did not resolve to a string or collection of strings

To solve the issue, the group ID must be converted to a string:

# replace this
# result =
# with:
result = str(

Here is the diff:

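A minimal sketch of the fix (the function name and the integer group ID are hypothetical; only the str() conversion comes from the notes above):

```python
def resolve_group(group_id):
    # Activiti expects the expression to resolve to a string (or a
    # collection of strings), so convert the numeric group ID first.
    return str(group_id)

result = resolve_group(42)  # result is "42", not 42
```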

If you get a warning about CPU usage above 90%, restart LibreOffice:

sudo /opt/alfresco-community/libreoffice/scripts/ stop
sudo /opt/alfresco-community/libreoffice/scripts/ start

From soffice.bin using 100% of CPU




We use RabbitMQ (AMQP) when we deploy to Windows servers.

If you find Celery wants to use AMQP (amqp/, Connection refused), then check you created the Celery application module in your project (or example_appname) folder, and that the package __init__ file contains from .celery import app as celery_app. For more information, see Celery (using Redis) and Celery on Windows
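As a sketch of the layout described above (the broker URL is an assumption; adjust for Redis or RabbitMQ as appropriate):

```python
# example_appname/celery.py -- the Celery application module
from celery import Celery

app = Celery('example_appname', broker='redis://localhost:6379/0')

# example_appname/__init__.py should then contain:
# from .celery import app as celery_app
```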

Not Running Tasks

If you find Celery is not running tasks, try the following:

Open Task Manager on Windows and check you have a single instance of celery.exe running. I don’t know why (or even if) multiple instances cause a problem, but a single instance has got us processing tasks again.

If your task is in an app, check you are Using the @shared_task decorator

Windows Service

We have a Celery Windows Service. If it isn’t working, here are some things to try:

To see an error message, try running:

c:\kb\Python35\Lib\site-packages\win32\pythonservice.exe "Celery Worker"


This is the code which is run by HandleCommandLine. I used this to find the "pywintypes35.dll" is missing from your computer message, which I fixed by running the next step in these notes.

Try installing pypiwin32 as a global package using the installer. The installer runs a post-install script which copies the DLL files to the correct locations.


Running pip install pypiwin32==219 doesn’t seem to run the script, so the service cannot find the DLL files that it needs!

To debug the service start-up, add ipdb to the code and then run:

python debug


If a cron script in /etc/cron.d has a . in its file name, then it will not run! (configs with dots in the file name do not work in /etc/cron.d)
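The naming rule can be sketched as a check (the helper is hypothetical; the pattern follows cron(8), which only runs files whose names consist of letters, digits, underscores and hyphens):

```python
import re

# /etc/cron.d file names must not contain dots; cron silently skips them.
VALID_CRON_D_NAME = re.compile(r'^[A-Za-z0-9_-]+$')

def is_valid_cron_d_name(name):
    return bool(VALID_CRON_D_NAME.match(name))
```

So, for example, a file named example.com is silently ignored, which is why the script never runs.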


I was getting SSL certificate verify failed errors when using devpi (which uses httpie and requests). To solve the issue, use devpi with a python 3 virtual environment (not python 2).


If you have a local PyPI server, and you do not want to use it, then comment out index-url in:


Django Compressor

I had an issue where relative image URLs in CSS files were not being found, e.g.:


Django Compressor is supposed to convert relative URLs to absolute, e.g.:


The compress management command creates a manifest file listing the files it creates. On the web server this can be found in:


On Amazon S3 it is in the CACHE folder.

You can look at the manifest files to find the name of the generated CSS file and look in this file to see if the relative URLs are converted to absolute.

You can use the browser developer tools to see which CSS file is being used.

To solve the issue, I checked the generated CSS file and the links were not absolute. I then ran compress, checked the generated CSS file again, and the links were absolute. I restarted the Django project on the server and all was OK.


I also uninstalled django-storages-redux and reinstalled the old version: (git+

… but I don’t think that made a difference?!


When testing the scripts:

No protocol specified
!! (Qt:Fatal) QXcbConnection: Could not connect to display :0

To stop this error, use a headless connection i.e. ssh into the computer or use a separate console. This will still be an issue if you have a GUI and you sudo to a user who is not running a GUI.


If the backup server runs out of space:

  1. Lots of directories in /tmp called .dropbox-dist* (10Gb)

  2. The backup folder for the site had lots of .sql files from a presumably failed backup (3Gb)

  3. Check the /home/web/tmp folder. Malcolm deleted this, which freed 1.8G of space!

  4. Check the /home/web/temp/ folder and track down large files:

    du -sh *
  5. You could also try (it didn’t free any space for me):

    rm -r /home/web/repo/files/dropbox/<site name>/Dropbox/.dropbox.cache/*



If you get this error:

No module named gio

To solve the issue, install the GObject bindings:

apt-get install python-gobject-2


For version 6.x issues, see Update to version 6

Connection marked as dead

Errors from the ElasticSearch client saying:

%s marked as dead for %s seconds

The code can be seen here:

My thought is that the pyelasticsearch client is timing out when the cron task re-indexes the data (there are lots of records, so I would expect this to take some time). The connections are pooled and time out, so the connection is marked as dead.

To see if this is the problem (or not), I have added BATCH_SIZE and TIMEOUT to the settings:

HAYSTACK_CONNECTIONS = {
    'default': {
        'BATCH_SIZE': 100,
        'ENGINE': 'haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine',
        'INDEX_NAME': '{}'.format(SITE_NAME),
        'TIMEOUT': 60 * 5,
        'URL': '',
    },
}

For documentation on these settings:


If you find Continuous Integration (CI) is running tests from other apps, then check the project setup.cfg file to make sure src is included in the norecursedirs section. For details, see Continuous Integration.



If you get 404 not found, then check you ran sudo service nginx reload.


If you get an error similar to this from salt highstate:

ID: letsencrypt-git
Function: git.latest
Result: False
Comment: Repository would be updated from 2434b4a to f0ebd13, but there are
uncommitted changes.

Log onto the affected server:

sudo -i
cd /opt/letsencrypt
git status
# checkout the file which is listed (in my case "letsencrypt-auto")
git checkout letsencrypt-auto
git status # should show nothing

init-letsencrypt - memory

virtual memory exhausted: Cannot allocate memory

Certbot has problem setting up the virtual environment.
Based on your pip output, the problem can likely be fixed by
increasing the available memory.

We solved the issue by creating a temporary swap file and then retrying the init-letsencrypt command:

sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

Check the swap status with:

sudo swapon -s


I can’t solve the issue, so I am referring it to a colleague… For more information, see

init-letsencrypt - datetime

ImportError: No module named datetime

To solve the issue:

rm -r /home/patrick/.local/

SSL Stapling Ignored

Malcolm says that stapling needs a newer version of nginx.


502 Bad Gateway

This is a general error. Find the cause by looking in the following files:

sudo -i -u web
# check the files in:
tail -f ~/repo/uwsgi/log/hatherleigh_info.log

sudo -i
tail -f /var/log/nginx/error.log
# check the log files in:
tail -f /var/log/supervisor/

bind() to failed

nginx won’t start and /var/log/nginx/error.log shows:

[emerg]: bind() to failed (98: Address already in use)
[emerg] 15405#0: bind() to failed (98: Address already in use)

When I stopped the nginx service, I could still see the ports being used:

lsof -i :80
lsof -i :443

From bind() to failed, killing the users of the ports sorted the issue:

sudo fuser -k 80/tcp
sudo fuser -k 443/tcp


I am not overly happy with this solution. But I guess the processes were started somehow and had not been stopped?

failed (13: Permission denied) using sendfile

sendfile wasn’t working, and the following message appeared in /var/log/nginx/error.log:

2017/05/18 17:34:30 [error] 1835#1835: *315 open()
failed (13: Permission denied),
client:, server:,
request: "GET /dash/document/issue/version/3/download/ HTTP/1.1",
upstream: "uwsgi://", host: ""

The www-data user didn’t have permission to read the file. The permissions were -rw-------.

To solve the problem, add the following to your settings/ file:


Django will then create files with -rw-r--r-- permissions and all will be well.
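The elided setting is most likely Django's FILE_UPLOAD_PERMISSIONS; here is a sketch of the effect using plain os calls:

```python
import os
import stat
import tempfile

# Django's FILE_UPLOAD_PERMISSIONS takes an octal mode; 0o644 is
# -rw-r--r--, readable by the www-data user.
FILE_UPLOAD_PERMISSIONS = 0o644

fd, path = tempfile.mkstemp()  # mkstemp creates files as -rw------- (0o600)
os.close(fd)
os.chmod(path, FILE_UPLOAD_PERMISSIONS)
mode = stat.S_IMODE(os.stat(path).st_mode)
os.remove(path)
```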

For more information, see Django Media.

no python application found, check your startup logs for errors

This issue was caused by missing environment variables, e.g. NORECAPTCHA_SITE_KEY. I was running tail on the log file for the web application e.g. ~/repo/uwsgi/log/, and I think the error was further up the page (so use vim next time to check).

I found the error by trying to run a management command, e.g. help, which showed the error. I added the missing environment variables to the vassal and all was fine.
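A fail-fast startup check would have surfaced the problem immediately (the function name and variable list are hypothetical):

```python
import os

REQUIRED_ENV = ["NORECAPTCHA_SITE_KEY"]

def check_required_env(environ=None):
    # Raise a clear error listing any missing environment variables,
    # instead of a vague "no python application found" from uwsgi.
    environ = os.environ if environ is None else environ
    missing = [name for name in REQUIRED_ENV if name not in environ]
    if missing:
        raise RuntimeError(
            "Missing environment variables: {}".format(", ".join(missing))
        )
```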


From Failed to load PDF document. To fix it, I changed:

response['Content-Disposition'] = "attachment; filename={}".format(


response['Content-Disposition'] = "inline; filename={}".format(
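The change can be wrapped in a small helper (the function name is hypothetical): inline asks the browser to render the PDF, while attachment forces a download.

```python
def content_disposition(filename, inline=True):
    # "inline" renders the PDF in the browser; "attachment" forces a
    # download, which Chrome reported as "Failed to load PDF document".
    disposition = "inline" if inline else "attachment"
    return "{}; filename={}".format(disposition, filename)
```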


Ubuntu 14.04 LTS


Check you have a backup of all databases on your development machine.

If you have upgraded from a previous version of Ubuntu running Postgres 9.1, you might need to completely remove the old version:

sudo apt-get purge postgresql-9.1

record app

If the conversion isn’t working, then it might be because LibreOffice is running on your Desktop.

Shut it down and the conversion should succeed. The error message will probably be: Cannot find converted file.



It took me a long time to find the fix for this issue:

Jinja variable 'env' is undefined

I solved it by renaming the variable. I don’t know for sure, but I think env is a reserved name in Salt.



For Ubuntu only…

On the master and minion, open the Firewall for Salt:

ufw allow salt


Getting a weird error (which I don’t really understand):

Cannot find a question for shared/accepted-oracle-license-v1-1

To solve the issue, I ran the following:

# # this showed the issue
# /bin/echo /usr/bin/debconf shared/accepted-oracle-license-v1-1 seen true  | /usr/bin/debconf-set-selections
error: Cannot find a question for shared/accepted-oracle-license-v1-1

# # to solve the issue
# /bin/echo /usr/bin/debconf shared/accepted-oracle-license-v1-1 select true | /usr/bin/debconf-set-selections
# /bin/echo /usr/bin/debconf shared/accepted-oracle-license-v1-1 seen true  | /usr/bin/debconf-set-selections

Minion ID

To set the minion id:

# /etc/salt/minion
id: cloud-a

# re-start the minion and accept the key on the master
service salt-minion restart


Might be worth checking out this article instead of editing the minion id:

Jinja variable is undefined

Trying to add a variable to the context of a Jinja template:

Unable to manage file: Jinja variable 'backup' is undefined

I think the issue was the variable name. I tried backup and apple and they both failed. I renamed backup to enable_backup and it worked!


I couldn’t get virtualenv.managed working with python 3, so I ended up following the instructions in Using Salt with Python 3 and Pyvenv.

Here is the working Salt state: virtualenv using pyvenv


If you have issues with Selenium and Firefox, then try the following:

pip install -U selenium

The following issue occurred with chromedriver (v2.22):

File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/chrome/", line 82, in quit
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/chrome/", line 97, in stop
  url_request.urlopen("" % self.port)
File "/usr/lib/python2.7/", line 154, in urlopen
  return, data, timeout)
File "/usr/lib/python2.7/", line 429, in open
  response = self._open(req, data)
File "/usr/lib/python2.7/", line 447, in _open
  '_open', req)
File "/usr/lib/python2.7/", line 407, in _call_chain
  result = func(*args)
File "/usr/lib/python2.7/", line 1228, in http_open
  return self.do_open(httplib.HTTPConnection, req)
File "/usr/lib/python2.7/", line 1198, in do_open
  raise URLError(err)
urllib2.URLError: <urlopen error [Errno 111] Connection refused>

It was resolved by updating chromedriver to v2.24.


The current version of Haystack has an issue with the

To temporarily fix the issue:

vim +67 haystack/backends/

Edit the code so that it matches the fixed version on GitHub, i.e.:

for field in model._meta.fields:


Clearing “System Problem Detected” messages

Sometimes historical “System Problem Detected” messages re-appear when Ubuntu is started.

For example, a problem with the Chrome browser may not be reported to Ubuntu because Chrome is not a supported package.

These messages are from files stored in the /var/crash directory.

Investigate old crash messages

Change to the crash reporting directory as follows:

cd /var/crash

View the files in the directory as follows:

ls -al

Files that end with .crash are ASCII files containing the crash report detail. You can view them with your favourite editor (e.g. vim, nano or gedit). Some crash reports are readable by root only, so you may need to use sudo to be able to view them.

To use vim type:

sudo vim *.crash

To use nano type:

sudo nano *.crash

To use gedit type:

gksu gedit *.crash

You’ll be prompted for your password and, on successful entry, the file will open in your editor.

Delete historical crash messages

To delete historical crash messages, type:

sudo rm /var/crash/*

Any new crash messages that appear after that should be investigated.


It seems that a new cloud server using python 3 doesn’t install uwsgi correctly into the virtual environment.

Check the supervisor error log for uwsgi:


If you get the following:

exec: uwsgi: not found


sudo -i -u web
cd /home/web/repo/uwsgi
. venv_uwsgi/bin/activate
pip install uwsgi==2.0.1

The version of uwsgi can be found in


Issue with time when dual booting

To correctly synchronise the time in Windows when dual booting, start regedit and navigate to:


Right click anywhere in the right pane and choose New | DWORD (32-bit) Value. Name it:


Then double click on it and give it a value of 1.

See Incorrect Clock Settings in Windows When Dual-Booting with OS X or Linux.