Hi
Does anyone use a worksheet with a huge number of analyses on it? Let's say 100…200…500?
Do you experience performance issues when adding a result to a test, especially when it's a choice menu?
It seems like a frontend issue for now.
–lt
We experience the same performance issue. We have around 100 analyses, and every time we add a sample, control reference or duplicate, it takes around 5 minutes for the browser to respond. Most of the time we have 20 samples with 3 duplicates per sample plus 10 QC samples, so it takes 1-2 hours to create one worksheet. Our lab managers are burned out by that and complain to me most of the time.
SENAITE is very good open source software; we like it a lot except for the performance.
Is there a plan on the development team's roadmap to improve the performance?
That does sound very slow. What is the spec of the machine it’s running on?
Our SENAITE is running on a 4-core CPU, 8 GB RAM Linux machine. It seems SENAITE only uses one core (25% total usage), and memory utilization is around 2 GB.
I followed the blog How to improve Senaite performance? to tweak the configuration, but it didn't get much better.
For SENAITE version 2.5 we have already improved sample-creation performance by more than 60%:
Furthermore, we have split huge transactions for transitions (e.g. submit, verify etc.) into smaller ones, which reduces the risk of database conflicts:
Please therefore try the latest version and see if your situation improves.
This may not be viable depending on how you are hosting the site, but since Plone (and therefore SENAITE) is a Python-based application, it doesn't actually utilize multi-threading very efficiently. The ZEO configuration can get around this slightly, but only to the extent that Plone allows the transaction to be sub-divided.
What we have found at our lab is that the actual CPU processor family can make a large performance difference. We swapped from a 2.8GHz Xeon processor to a 2.8GHz i7 processor, and measured up to an ~80% reduction in load times on some of the slower screens.
Also, since the i7 has 8 cores, we can still run a large number of ZEO clients on a single server. In my experience, the higher-core multitasking servers are ineffective at running a Plone/Senaite site since they often compromise raw cache speed for other task management features.
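To illustrate the multi-client setup mentioned above, a ZEO server with several clients in a Plone/SENAITE buildout might look roughly like this. This is only a sketch under assumptions: the part names, ports and addresses are hypothetical and depend on your installation, and the option set shown is the minimal one for `plone.recipe.zeoserver` and `plone.recipe.zope2instance`:

```ini
; Hypothetical buildout fragment: one ZEO storage server plus two clients,
; each client running as its own OS process (and therefore on its own core).
[zeoserver]
recipe = plone.recipe.zeoserver
zeo-address = 127.0.0.1:8100

[client1]
recipe = plone.recipe.zope2instance
zeo-client = on
zeo-address = ${zeoserver:zeo-address}
http-address = 8081

[client2]
; buildout macro syntax: inherit everything from client1, change the port
<= client1
http-address = 8082
```

A load balancer (e.g. HAProxy or nginx) in front of the clients then spreads requests across the processes, which is how a multi-core machine gets utilized despite Python's single-threaded request handling.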
Thank you @faytrow for sharing your findings!
I wasn’t aware of this. How did you measure the performance increase?
Lately we used cProfile in combination with pstats:
import cProfile
import pstats

def your_function_to_profile():
    # Your code here
    pass

cProfile.run("your_function_to_profile()", filename="profile_data.prof")

# Load the profile data
profile_data = pstats.Stats("profile_data.prof")

# Print the statistics to the console
profile_data.print_stats()

# Alternatively, sort the statistics by a specific metric and print
# the top N functions
profile_data.sort_stats("cumulative").print_stats(10)
Or, if you prefer it visually, with snakeviz:
pip install snakeviz
snakeviz profile_data.prof
Exciting to see that the performance will be improved in 2.5.0. Is there an expected timeline for when it will be released?
Within the following 2-3 weeks probably
(sorry for the late reply)
I hadn’t used snakeviz before, but that’s a neat tool. I appreciate the recommendation.
I didn't trace down exactly where the slowdown was happening in the Zope code, other than that it was occurring during object creation in the Plone/Zope core. Most of our tests were run at the browser level using multiple devices, but on a local-network computer (no internet traffic).
I do still have that old server, so I can see if I can profile why the two architectures behave so differently, and how best to take advantage of both types.