Last time, we described the process of choosing a VPS for Sylius – which factors to consider when selecting a VPS for a Sylius-based eCommerce application. We also pointed out how various processors may influence application performance. The fact is that continued investment in hardware is not always recommended (or even possible) due to budget constraints.


Given the constraints mentioned above, the following factors should be taken into consideration when preparing the application’s environment:

  • Optimization of FastCGI Process Manager (PHP-FPM) settings
  • Optimization of database settings (in our case – MySQL)
  • Doctrine ORM optimization

This article points out some elements that you should take into account in the first phase. The advice below is a starting point and should encourage you to explore each topic further.

Supervising Sylius

Before we discuss optimizing the elements above, let’s start with an even more critical part – server monitoring. To better understand what is going on with the application and how its traffic affects the hardware, you should implement tools that continuously monitor and report parameters such as:

  • CPU load
  • RAM usage
  • The amount of memory used by key processes (database, PHP-FPM)
  • The number of concurrent hard-drive I/O operations
  • The number of application requests in a given period
  • Web traffic (download/upload)
  • HTTP codes returned by endpoints
  • The content of individual subpages
  • SSL certificate status (validity)


With this data, you can tune the server settings to make proper use of the available hardware resources before deciding to upgrade or extend them.

You can also reschedule some recurring tasks to another part of the day to relieve the machine during peak traffic.
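For example, heavy recurring jobs can be moved to off-peak hours with cron. A sketch of such a crontab (the times, paths, and database name are illustrative; `sylius:cancel-unpaid-orders` is a standard Sylius console command):

```shell
# m  h  dom mon dow  command
  0  3   *   *   *   php /var/www/sylius/bin/console sylius:cancel-unpaid-orders
 30  3   *   *   *   mysqldump --single-transaction sylius | gzip > /var/backups/sylius_$(date +\%F).sql.gz
```

Running the expiration of unpaid orders and the database backup at 3 a.m. keeps their CPU and I/O cost away from peak traffic.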

Awareness of the need to supervise applications is growing, which means more and more tools for this purpose are appearing on the market. In this article, however, we will focus on those available for free. Below are three of the most common monitoring tools you may encounter:


Munin

Munin is relatively the most straightforward of the three, thanks to its simple installation and clearly presented data. Its unquestionable advantage is the number of ready-made plugins, which cover almost all application-monitoring needs. Creating your own extensions is also easy. Sample charts of average CPU load and hard-disk operations during the day are shown below:

(Sample Munin charts: daily CPU load average and daily disk I/O.)


Nagios

The next tool is a bit more complicated than its predecessor. Its installation and initial launch require more knowledge and may seem difficult at first. However, it provides more elaborate mechanisms for alerting about unwanted states of the server, applications, and critical processes. An example dashboard showing the current state of the server is presented below.

You can quickly generate charts from the collected data. Numerous plugins extend the tool’s capabilities, and a very flexible interface lets you create your own.

(Screenshot: an example Nagios dashboard with the current state of monitored hosts and services.)

A demo of the application is available online.


Zabbix

Zabbix, the last tool presented in this article, differs from the others with a more refined graphical interface, greater possibilities of adjusting it to your needs, and a more straightforward initial configuration. It also provides many built-in monitoring functions, so you don’t need to install additional plugins (as is the case with Nagios). The downside, however, is the lack of a simple interface for creating your own extensions, which is often a bottleneck for more demanding users.

Now, let’s move on to optimizing each component.

PHP-FPM optimization

PHP-FPM configuration files are most commonly located under the following path (or a similar one):

/etc/php/*/fpm/php-fpm.conf (“*” stands for the PHP version, e.g. 7.4)

In the above file, you should first make sure that the following entries are included:

emergency_restart_threshold = 10
emergency_restart_interval = 1m
process_control_timeout = 10s

The first two parameters specify that if at least 10 child processes exit with an error within 1 minute, the main FPM process should restart automatically. It is a kind of fuse that protects against a complete loss of access to the application.

The third parameter means that when the main process receives a signal to terminate, the sub-processes have 10 seconds to finish their current operations before being forcibly ended. The majority of PHP requests take (or at least should take) far less than 10 seconds, so this value can be considered safe.

Now let’s move on to the more performance-critical settings, which are usually defined per pool (e.g. in /etc/php/*/fpm/pool.d/www.conf):

pm = dynamic
pm.max_children = 5
pm.start_servers = 3
pm.min_spare_servers = 2
pm.max_spare_servers = 4
pm.max_requests = 200

The first parameter sets the strategy for managing the number of child processes created by the main FPM process to handle incoming requests. According to the documentation, three options are available: static, dynamic, and ondemand.

  • Static – rarely used; the number of child processes is constant and independent of current demand, which may extend the time needed to deliver a response
  • Dynamic – most often used; keeps the number of child processes at a constant minimum, ready to serve requests, with the option to spawn additional spare processes when demand increases
  • Ondemand – also rarely used; spawns processes only on demand, which may lead to server overload and slower request handling (due to the need to start a process first)

The next parameters define:

  • pm.max_children – the maximum number of child processes
  • pm.start_servers – the number of processes started at launch (dynamic mode only)
  • pm.min_spare_servers – the minimum number of idle (spare) processes
  • pm.max_spare_servers – the maximum number of idle (spare) processes
  • pm.max_requests – the number of requests a process serves before being restarted

The values of these settings should be adjusted to the configured memory_limit (in the php.ini file), the available RAM, and the other services running on the server, which must be left an appropriate amount of resources.

Assuming that one PHP-FPM process uses about 90 MB on average, and your server has about 2000 MB available for FPM (after accounting for other services), pm.max_children should be around 22 (2000 / 90 ≈ 22.2).
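A rough way to derive pm.max_children is to measure the average memory footprint of one PHP-FPM worker and divide the RAM you can spare by it. A minimal sketch (the process name and the 2000 MB figure are assumptions – adjust them to your server; it falls back to 90 MB when no worker is running):

```shell
# Average RSS of php-fpm workers in MB; defaults to 90 MB if none are running.
AVG_MB=$(ps -o rss= -C php-fpm 2>/dev/null \
  | awk '{s+=$1; n++} END {v = (n ? int(s/n/1024) : 90); if (v < 1) v = 1; print v}')
AVAILABLE_MB=2000   # RAM you can dedicate to PHP-FPM (illustrative)
echo $(( AVAILABLE_MB / AVG_MB ))
```

Re-measure under realistic traffic: worker memory grows with opcode cache, extensions, and application state, so a freshly started pool underestimates it.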

MySQL Optimization

Optimizing MySQL comes down to setting the database’s parameters appropriately for the available infrastructure. For this purpose, we recommend using freely available tuning tools that will show you which parameters require correction.

Since InnoDB is currently the most used storage engine in Sylius and other Symfony-based applications, this article will focus solely on it.

These tools will help you choose the configuration for the following parameters (most commonly configured by modifying the my.cnf file):

  • innodb_data_file_path
  • innodb_buffer_pool_size
  • innodb_log_file_size
  • innodb_log_buffer_size
  • innodb_flush_log_at_trx_commit
  • innodb_lock_wait_timeout
  • innodb_flush_method

innodb_data_file_path

A parameter that specifies the name and sizing of the system tablespace data file. It is important to keep the growth of this file under control, as uncontrolled growth hurts performance, so the documentation suggests the following configuration:

innodb_data_file_path = ibdata1:50M:autoextend

innodb_buffer_pool_size

It controls one of the InnoDB engine’s basic mechanisms – the buffer pool, i.e., the RAM dedicated to caching the database’s data and indexes. It is recommended to set it to 50-80% of the available RAM (depending on the concurrent processes).

innodb_log_file_size

The size of the redo log file, which records operations on the database. The greater the value, the fewer I/O operations hit the hard disk, which has a positive effect on performance, at the cost of a longer recovery time after a sudden database shutdown. This parameter can be set at 20-30% of the buffer pool size.

innodb_log_buffer_size

The size of the in-memory buffer for log writes. The higher the value, the less often large queries and transactions have to flush the log to disk before committing, saving I/O operations on hard disks.

innodb_flush_log_at_trx_commit

It controls the trade-off between strictly committing every operation to disk and buffering writes to the log file.

Three values are allowed for this parameter: 0, 1 (default), and 2.

If the parameter is set to 0, the log is written and flushed to disk once per second, which may result in losing up to a second of transactions in the event of a sudden shutdown between cycles.

The value 1 is the default setting, which means the log is written and flushed to disk after each committed transaction.

The value 2 means the log is written after each transaction, but flushed to disk only once per second. As with 0, a sudden power failure or OS crash may result in data loss.

innodb_lock_wait_timeout

This is the time a transaction waits for a row lock to be released when multiple transactions overlap. By default it is set to 50 seconds. It should be adjusted to the application’s requirements; for applications with high data turnover, consider reducing this parameter.

innodb_flush_method

This parameter defines the method used to flush data to the log and data files. According to the official documentation, six options are available. On Linux systems, we recommend setting this parameter to “O_DIRECT” (the letter “O”, not zero).
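Bringing the parameters above together, a my.cnf fragment for a server with roughly 4 GB of RAM might look like this (the values are illustrative and should be validated against your own measurements and the tuning tools mentioned earlier):

```ini
[mysqld]
innodb_data_file_path          = ibdata1:50M:autoextend
innodb_buffer_pool_size        = 2G       # 50-80% of available RAM
innodb_log_file_size           = 512M     # 20-30% of the buffer pool
innodb_log_buffer_size         = 16M
innodb_flush_log_at_trx_commit = 1        # safest; 2 trades durability for speed
innodb_lock_wait_timeout       = 50       # seconds; lower for high data turnover
innodb_flush_method            = O_DIRECT
```

After changing innodb_log_file_size, restart MySQL cleanly so the redo logs can be rebuilt at the new size.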

Doctrine ORM optimization

ORM is a tool that significantly speeds up development and the application’s integration with the database. However, it often introduces overhead that lengthens query response times and, in turn, response generation.

You can improve Doctrine’s efficiency by:

  • Enabling OPcache, which eliminates the need to parse PHP files on every request.
  • Enabling the cache for metadata (entity and relation definitions) and for queries written in DQL.
  • Marking entities that are only read as “read-only”, so that Doctrine does not track changes on them.
  • Using collections in “extra-lazy” mode, especially for very complex relations and large data sets.
  • Building queries appropriately: using indexes, joining related tables only when you are sure they will be used in a given place, and arranging the “ON” and “WHERE” clauses in the correct order.
  • Passing simple types, for example arrays, to the view instead of hydrating full objects.
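As a sketch of the read-only and extra-lazy points, a hypothetical Doctrine mapping might look like this (class and field names are illustrative; attribute syntax from Doctrine ORM 2.x):

```php
<?php

use Doctrine\Common\Collections\ArrayCollection;
use Doctrine\Common\Collections\Collection;
use Doctrine\ORM\Mapping as ORM;

// Hypothetical dictionary entity: marked read-only, so Doctrine
// skips change-tracking for it in the unit of work.
#[ORM\Entity(readOnly: true)]
class Country
{
    #[ORM\Id, ORM\Column, ORM\GeneratedValue]
    private ?int $id = null;
}

#[ORM\Entity]
class Product
{
    #[ORM\Id, ORM\Column, ORM\GeneratedValue]
    private ?int $id = null;

    // EXTRA_LAZY: count(), contains() and slice() run as SQL queries
    // instead of hydrating the whole collection into memory.
    #[ORM\OneToMany(targetEntity: Review::class, mappedBy: 'product', fetch: 'EXTRA_LAZY')]
    private Collection $reviews;

    public function __construct()
    {
        $this->reviews = new ArrayCollection();
    }
}
```

For the last bullet, array hydration (`$query->getArrayResult()`) returns plain arrays instead of full objects, which is noticeably cheaper for read-only listings.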

If you have any questions related to the topics discussed above, do not hesitate to contact us.