11 comments

  • hrimfaxi 15 hours ago ago

    Looked for a way to install/actually set up something.

    > Before configuring pgX, you need to set up PostgreSQL metrics collection:

    Click the link.

    > Prerequisites > PostgreSQL instance > Scout account and API credentials > Scout Collector installed and configured (see Quick Start)

    Multiple clicks to find out I need a separate account somewhere (wth is scout?). That's gonna be a no from me dawg.

    At least when places like Datadog do content marketing, they provide ways to monitor the services using tools that don't require paying them money.

    • selcuka 11 hours ago ago

      > wth is scout?

      This is a feature of an observability product called Scout. It's not a standalone tool.

  • muteh 18 hours ago ago

    What does it do? The page doesn't even mention a product until near the end, and then... doesn't explain it?

    • rnjn 10 hours ago ago

      Founder at base14 here, the company building Scout. Thanks for the feedback, I will work on improving my messaging. Scout is an OTel-native observability platform (data lake, UI, alerts, analytics, MCP, the works). We are building specialised explorers (the 'X' suffix stands for explorer) like pgX for Postgres. Essentially these are telemetry readers for components that send relevant metrics and logs through to a telemetry data lake. For each component/domain we ask experts what they look at during analysis and incidents, and bring that into a unified, full-stack dashboard that goes beyond what a regular Prometheus endpoint provides. Thanks again.

    • chatmasta 17 hours ago ago

      At this point I’m closing posts after skimming the subheadings that haven’t even been changed from obvious LLM output.

  • PhilippGille 4 hours ago ago

    Naming conflict with pgx, a popular Postgres driver for Go: https://github.com/jackc/pgx

  • cronelius 11 hours ago ago

    There is a popular Postgres client for Go called pgx. This naming will likely sow confusion.

  • sublinear 18 hours ago ago

    > The engineer is forced into manual correlation: jumping between dashboards, aligning timelines by eye, [and] inferring causality from coincidence

    I just generate a random UUID in the application and make sure to log it everywhere across the entire stack along with a timestamp.

    Any old log aggregator can give me an accurate timeline grouped by request UUID across every backend component all in one dashboard.

    It's the very first thing that I have the application do when handling a request. It's injected at the log handler level. There's nothing to break and nothing to think about.
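
    In Go terms the pattern is roughly this (a sketch, not production code; the middleware and handler names are made up, and it assumes net/http, log/slog and github.com/google/uuid):

        package main

        import (
            "context"
            "log/slog"
            "net/http"
            "os"

            "github.com/google/uuid"
        )

        type ctxKey struct{}

        // withRequestID generates one UUID per request and stores it in the context.
        func withRequestID(next http.Handler) http.Handler {
            return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                ctx := context.WithValue(r.Context(), ctxKey{}, uuid.NewString())
                next.ServeHTTP(w, r.WithContext(ctx))
            })
        }

        // idHandler wraps any slog.Handler and stamps the request UUID onto every record.
        type idHandler struct{ slog.Handler }

        func (h idHandler) Handle(ctx context.Context, rec slog.Record) error {
            if id, ok := ctx.Value(ctxKey{}).(string); ok {
                rec.AddAttrs(slog.String("request_id", id))
            }
            return h.Handler.Handle(ctx, rec)
        }

        func main() {
            slog.SetDefault(slog.New(idHandler{slog.NewJSONHandler(os.Stdout, nil)}))
            http.Handle("/", withRequestID(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                // request_id and timestamp show up on every line without touching call sites.
                slog.InfoContext(r.Context(), "handling request")
                w.Write([]byte("ok"))
            })))
            http.ListenAndServe(":8080", nil)
        }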

    So, I have no problem knowing precise cause and effect with regard to all logs for a given isolated request, but I agree that there may be blips that affect multiple requests (outages, etc.). We have synthetic tests for outages though.

    I too am struggling to understand what this tool does beyond grouping all logs by a unique request identifier.

    • rnjn 9 hours ago ago

      Founder at base14 here, the company building Scout. Thanks for the feedback. We do something similar for tracing as well, but pgX does a bit more than that: engineers should be able to trace (like you mention) and also see and analyse the condition of the DB itself, e.g. correlate a query slowdown with locks, vacuums, etc., all on one screen or within a couple of clicks. We are building specialised explorers like pgX for Postgres. Essentially these are telemetry readers for components that send relevant metrics and logs through to a telemetry data lake; for each component/domain we ask experts what they look at during analysis and incidents, and bring that into a unified, full-stack dashboard and MCP.

      Scout is our OTel-native observability product (data lake, UI, alerts, analytics, MCP, the works). What we call pgX in the blog is an add-on to Scout.
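
      For a sense of what the lock correlation means concretely, the manual version is something like the sketch below, run against the standard pg_stat_activity / pg_locks catalogs (here through the unrelated Go pgx driver mentioned in other comments; the connection string is a placeholder). The idea with pgX is to surface this kind of view, aligned with query latency and vacuum activity, without writing it by hand.

          package main

          import (
              "context"
              "fmt"
              "os"

              "github.com/jackc/pgx/v5"
          )

          func main() {
              ctx := context.Background()
              // DATABASE_URL is a placeholder; point it at the instance you want to inspect.
              conn, err := pgx.Connect(ctx, os.Getenv("DATABASE_URL"))
              if err != nil {
                  panic(err)
              }
              defer conn.Close(ctx)

              // Sessions currently blocked on a lock, joined to the query they are running.
              rows, err := conn.Query(ctx, `
                  SELECT a.pid, coalesce(a.query, ''), l.mode,
                         coalesce(a.wait_event_type, '')
                  FROM pg_stat_activity a
                  JOIN pg_locks l ON l.pid = a.pid
                  WHERE NOT l.granted`)
              if err != nil {
                  panic(err)
              }
              defer rows.Close()

              for rows.Next() {
                  var pid int
                  var query, mode, waitType string
                  if err := rows.Scan(&pid, &query, &mode, &waitType); err != nil {
                      panic(err)
                  }
                  fmt.Printf("pid=%d mode=%s wait=%s query=%s\n", pid, mode, waitType, query)
              }
          }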

    • iaaan 13 hours ago ago

      If you use OpenTelemetry, it basically does exactly that and you can send traces to some self-hosted FOSS visualizer, like Jaeger. You can also easily get the UUID of the spans/traces and have your logger automatically put them in every log message.
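
      The logging half is the same trick as the handwritten UUID handler above, just keyed off the span context instead (a sketch, assuming log/slog and the go.opentelemetry.io/otel/trace package; wiring up the actual tracer and exporter is omitted):

          package main

          import (
              "context"
              "log/slog"
              "os"

              "go.opentelemetry.io/otel/trace"
          )

          // traceHandler wraps any slog.Handler and stamps the active trace and span IDs
          // onto every record, so log lines can be matched to traces in e.g. Jaeger.
          type traceHandler struct{ slog.Handler }

          func (h traceHandler) Handle(ctx context.Context, rec slog.Record) error {
              if sc := trace.SpanContextFromContext(ctx); sc.IsValid() {
                  rec.AddAttrs(
                      slog.String("trace_id", sc.TraceID().String()),
                      slog.String("span_id", sc.SpanID().String()),
                  )
              }
              return h.Handler.Handle(ctx, rec)
          }

          func main() {
              slog.SetDefault(slog.New(traceHandler{slog.NewTextHandler(os.Stdout, nil)}))
              // With a real tracer installed, any slog.InfoContext(ctx, ...) call inside a
              // span now carries trace_id/span_id automatically.
              slog.InfoContext(context.Background(), "no active span here, so no IDs attached")
          }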

      • sublinear 12 hours ago ago

        I have no doubt there are many tools, but I specifically mentioned my solution because it doesn't require any tools at all and is just a matter of log hygiene.

        They spend the whole page talking about a scenario that I've only seen happen in production when there were no app devs involved and people were allergic to writing a log format string, let alone a single line of code.