We already help monitor your infrastructure and applications with our more than 500 integrations, which faithfully collect work and resource metrics from your systems. But often, after drilling down, you find that some system resource is saturated on a host. By monitoring that host at the process level, you can see why the host is resource-constrained, and which piece of software is causing the issue. You can bounce the host and move on, but in order to prevent the issue from happening again, you need a deeper level of understanding and visibility.

This visibility is especially important when a particular process goes haywire, starving other processes of resources and bringing down hosts or entire distributed services. Without complete visibility at the process level, identifying the culprit that triggered the chain reaction is nearly impossible.

The problem with monitoring every process is one of cardinality. We found that an average host runs about 100 processes, with significant variance depending on the software running. Postgres, for instance, can easily spawn thousands of workers on a single host. Add to this the fact that familiar tools for process monitoring collect metrics frequently, and for good reason: processes move fast.
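To make the process-level drill-down concrete, here is a minimal, hypothetical sketch (not Datadog's implementation) of how a host's top resource consumers can be identified by reading Linux's `/proc` filesystem directly, with no third-party libraries. The function name `top_memory_processes` is an assumption for illustration:

```python
# Minimal sketch: find which processes consume the most memory by
# reading /proc directly (Linux only; returns [] on other systems).
import os

def top_memory_processes(n=5):
    """Return the n processes with the largest resident set size (RSS),
    as (rss_kb, pid, name) tuples sorted largest-first."""
    if not os.path.isdir("/proc"):
        return []  # not a Linux host with procfs mounted
    procs = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/status") as f:
                fields = dict(line.split(":", 1) for line in f if ":" in line)
            name = fields["Name"].strip()
            # Kernel threads have no VmRSS line; treat them as 0 kB.
            rss_kb = int(fields.get("VmRSS", "0 kB").split()[0])
            procs.append((rss_kb, int(pid), name))
        except (FileNotFoundError, ProcessLookupError, PermissionError):
            continue  # the process exited while we were scanning
    return sorted(procs, reverse=True)[:n]

if __name__ == "__main__":
    for rss_kb, pid, name in top_memory_processes():
        print(f"{name:20s} pid={pid:<7d} rss={rss_kb} kB")
```

A real agent would sample this continuously and at a high frequency, which is exactly the cardinality and churn problem described above: one hundred processes per host, each changing from second to second.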