Will Amazon S3 Vectors kill vector databases or save them?

(zilliz.com)

269 points | by Fendy a day ago ago

121 comments

  • simonw a day ago ago

    This is a good article and seems well balanced despite being written by someone with a product that directly competes with Amazon S3 Vectors. I particularly appreciated their attempt to reverse-engineer how S3 Vectors works, including this detail:

    > Filtering looks to be applied after coarse retrieval. That keeps the index unified and simple, but it struggles with complex conditions. In our tests, when we deleted 50% of data, TopK queries requesting 20 results returned only 15—classic signs of a post-filter pipeline.
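
    Here's a minimal sketch (mine, not theirs) of why a post-filter pipeline under-fills K, using brute-force scoring as a stand-in for whatever coarse retrieval S3 Vectors actually does:

      import numpy as np

      rng = np.random.default_rng(0)
      vectors = rng.normal(size=(10_000, 128))                   # toy corpus
      query = rng.normal(size=128)
      deleted = set(rng.choice(10_000, 5_000, replace=False))    # "delete" 50% of rows

      # Coarse retrieval over the unified index: top 20 by cosine similarity.
      sims = vectors @ query / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(query))
      candidates = np.argsort(-sims)[:20]

      # Post-filter: deleted rows are dropped *after* retrieval, so fewer than 20 survive.
      # A pre-filter (or filter-aware) pipeline would keep searching until it had 20.
      results = [int(i) for i in candidates if i not in deleted]
      print(len(results))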

    Things like this are why I'd much prefer if Amazon provided detailed documentation of how their stuff works, rather than leaving it to the development community to poke around and derive those details independently.

    • libraryofbabel a day ago ago

      > Things like this are why I'd much prefer if Amazon provided detailed documentation of how their stuff works, rather than leaving it to the development community to poke around and derive those details independently.

      Absolutely this. So much engineering time has been wasted on reverse-engineering internal details of things in AWS that could be easily documented. I once spent a couple days empirically determining how exactly cross-AZ least-outstanding-requests load balancing worked with AWS's ALB because the docs didn't tell me. Reverse-engineering can be fun (or at least I kinda enjoy it) but it's not a good use of our time and is one of those shadow costs of using the Cloud.

      It's not like there's some secret sauce here in most of these implementation details (there aren't that many ways to design a load balancer). If there was, I'd understand not telling us. This is probably less an Apple-style culture of secrecy and more laziness and a belief that important details have been abstracted away from us users because "The Cloud" when in fact, these details do really matter for performance and other design decisions we have to make.

      • TheSoftwareGuy a day ago ago

        >It's not like there's some secret sauce here in most of these implementation details. If there was, I'd understand not telling us. This is probably less an Apple-style culture of secrecy and more laziness and a belief that important details have been abstracted away from us users because "The Cloud" when in fact, these details do really matter for performance and other design decisions we have to make.

        Having worked inside AWS I can tell you one big reason is the attitude/fear that anything we put in our public docs may end up getting relied on by customers. If customers rely on the implementation to work in a specific way, then changing that detail requires a LOT more work to prevent breaking customers' workloads. If it is even possible at that point.

        • wubrr a day ago ago

          Right now, it is basically impossible to reliably build full applications with things like DynamoDB (among other AWS products), without relying on internal behaviour which isn't explicitly documented.

          • cbsmith 20 hours ago ago

            I've built several DynamoDB apps, and while you might have some expectations of internal behaviour, you can build apps that are pretty resilient to changes in the internal behaviour while relying heavily on the documented behaviour. I actually find the extent of the opacity a helpful guide to the limitations of the service.

          • mannyv 16 hours ago ago

            Totally incorrect for Dynamo.

            It was probably correct for Cognito 1.0.

          • JustExAWS a day ago ago

            I am also a former AWS employee. What non-public information did you need for DDB?

            • tracker1 21 hours ago ago

              Try ingesting a complete WHOIS dump into DDB sometime. When I tried, this was before autoscaling worked at all... and it absolutely wasn't anything one could consider fun.

              In the end, after multiple implementations, we finally had to use a Java Spring app on a server with a LOT of RAM just to buffer the CSV reads without blowing up on the pushback from DDB. I think the company spent over $20k over a couple of months on different efforts in a couple of different languages (C#/.Net, Node.js, Java) across a couple of different routes (multiple queues, lambda, etc) just to get the initial data ingestion working for the first time.

              The Node.js implementation was fastest, but would always blow up a few days in without the ability to catch with a debugger attached. The queues and lambda experiments had throttling issues similar to the DynamoDB ingestion itself, even with the knobs turned all the way up. I don't recall what the issue with the .Net implementation was at the time, but it blew up differently.

              I don't recall all the details, and tbh I shouldn't care, but it would have been nice if there had been some extra guidance on taking a few GB of CSV into DynamoDB at the time. To this day, I still hate ETL work.
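
              For what it's worth, the boto3 batch_writer is what I'd reach for today - it buffers 25-item batches and automatically re-sends unprocessed items when DDB pushes back (the table name and CSV columns below are made up):

                import csv
                import boto3

                table = boto3.resource("dynamodb").Table("whois")  # hypothetical table

                with open("whois_dump.csv", newline="") as f, table.batch_writer(overwrite_by_pkeys=["domain"]) as batch:
                    for row in csv.DictReader(f):  # assumes a header row with a 'domain' column
                        batch.put_item(Item={"domain": row["domain"], "registrar": row.get("registrar", "")})

              It retries unprocessed items for you, but it won't conjure write capacity out of thin air; on-demand capacity mode (which didn't exist back then) is what really removes the knob-turning.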

              • everfrustrated 12 hours ago ago

                Why would you expect an OLTP db like DDB to work for ETL? You'd have the same problems if you used Postgres.

                It's not like AWS is short on ETL technologies to use...

                • scarface_74 5 hours ago ago

                  Even in an OLTP db, there is often a need to bulk import and export data. AWS has methods in most supported data stores - Elasticsearch, DDB, MySQL, Aurora, Redshift, etc. - to bulk insert from S3.

              • JustExAWS 21 hours ago ago
                • tracker1 21 hours ago ago

                  Cool... though that would make it difficult to get the hundred or so CSVs into a single table. Since that isn't supported, I guess stitching them together before processing would be easy enough... also, no idea when that feature became available.

                  • JustExAWS 20 hours ago ago

                    It’s never been a good idea to batch ingest a lot of little single files using any ETL process on AWS, whether it be DDB, Aurora MySQL/Postgres using “load data from S3…”, Redshift batch import from S3, or just using Athena (yeah I’ve done all of them).

            • cyberax 16 hours ago ago

              A tool to look at hot partitions, for one thing.

              • JustExAWS 7 hours ago ago
                • cyberax 2 hours ago ago

                  The keyword here is "should" :) Back then DynamoDB also had a scaling problem: data can easily be split into more partitions, but it's never merged back into fewer partitions.

                  So if you scaled up and then down, you might have ended up with a lot of partitions that got only a few IOPS of quota each. It's better now with burst IOPS, but it still is a problem sometimes.

        • libraryofbabel a day ago ago

          And yet "Hyrum's Law" famously says people will come to rely on features of your system anyway, even if they are undocumented. So I'm not convinced this is really customer-centric; it's more about AWS being able to say: hey, sorry this change broke things for you, but you were relying on an internal detail. I do think there is a better option here, where the important details are published with a "this is subject to change at any time" warning slapped on them. Otherwise, like OP says, customers just have to figure it all out on their own.

          • lazide a day ago ago

            Sure, but a court isn't going to consider Hyrum's law in a tort claim, while it might give AWS documentation - even with a disclaimer - more weight.

            Rely on undocumented behavior at your own risk.

            • vlovich123 a day ago ago

              Has Amazon ever been taken to court for things like this? I really don't think this is a legal concern.

              • teaearlgraycold a day ago ago

                I don't buy the legal angle. But if I were an overworked Amazon SWE I'd also like to avoid the work of documentation and a proper migration the next time the implementation is changed.

              • lazide a day ago ago

                Amazon is involved in so many lawsuits right now, I honestly can’t tell. I did some google searches and gave up after 5+ pages.

        • simonw a day ago ago

          Thanks for this, that's a really insightful comment.

        • UltraSane 14 hours ago ago

          Just add an option to re-enable spacebar heating.

        • scarface_74 17 hours ago ago

          You have been quoted by Simon Willison on his blog - his blog is popular on HN.

          https://simonwillison.net/2025/Sep/8/thesoftwareguy/#atom-ev...

      • BobbyJo 5 hours ago ago

        > This is probably less an Apple-style culture of secrecy and more laziness and a belief that important details have been abstracted away from us users

        As someone who has worked on providing infra to third parties, I can say that providing more detail than necessary will hurt your chances with some bigger customers. Giving them more information than they need or ask for makes your product look more complicated.

        However sophisticated you think a customer of this product will be, go lower.

      • kenhwang 16 hours ago ago

        Did you have an account manager or support contract with AWS? IME, they're more than willing to set up a call with one of their engineers to disclose implementation details like this after your company signs an NDA.

      • yupyupyups 3 hours ago ago

        >So much engineering time has been wasted on reverse-engineering internal details of things

        It feels like this is true for proprietary software in general.

      • javier2 21 hours ago ago

        It's likely not specified because they want to keep the right to improve or change it later. Documenting in too much detail makes later changes way harder.

      • ithkuil 13 hours ago ago

        OTOH once you document something you need to do more work when you change the behaviour

      • whakim a day ago ago

        > It's not like there's some secret sauce here in most of these implementation details.

        IME the implementation of ANN + metadata filtering is often the "secret sauce" behind many vector database implementations.

      • citizenpaul a day ago ago

        I have to assume that at this point it's either intentional (increases profits?) or because AWS doesn't truly understand their own systems due to the culture of the company.

        • messe a day ago ago

          > because AWS doesn't truly understand their own systems due to the culture of the company.

          This. There's a lot of freedom in how teams operate. Some teams have great internal documentation, others don't, and a lot of it is scattered across the internal Amazon wiki. I recall having to reach out on Slack on multiple occasions to figure out how certain systems worked, after diving through the docs and the relevant issue trackers didn't make it clear.

        • cyberax a day ago ago

          AWS also has a pretty diverse set of hardware, and often several generations of software running in parallel. Usually because the new generation does not quite support 100% of features from the previous generation.

    • alanwli a day ago ago

      The alternative is to find solutions that can reasonably support different requirements, because business needs change all the time, especially in the current state of our industry. From what I’ve seen, OSS Postgres/pgvector can adequately support a wide variety of requirements for millions to low tens of millions of vectors - low latencies, hybrid search, filtered search, the ability to serve out of memory and disk, strong-consistency/transactional semantics with operational data. For further scaling/performance (1B+ vectors and even lower latencies), consider a SOTA Postgres system like AlloyDB with AlloyDB ScaNN.
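
      For anyone who hasn't tried it, filtered vector search in pgvector is just SQL. A rough sketch with psycopg (the DSN, table, column, and filter are placeholders, and the embedding would come from your model):

        import numpy as np
        import psycopg
        from pgvector.psycopg import register_vector

        conn = psycopg.connect("dbname=app")           # placeholder DSN
        register_vector(conn)                          # adapt numpy arrays <-> the vector type

        query_embedding = np.random.rand(768).astype(np.float32)  # stand-in for a real embedding

        rows = conn.execute(
            """
            SELECT id, title
            FROM documents                   -- placeholder table with a vector(768) column
            WHERE tenant_id = %s             -- metadata filter runs in the same query
            ORDER BY embedding <=> %s        -- cosine distance operator from pgvector
            LIMIT 10
            """,
            ("acme", query_embedding),
        ).fetchall()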

      Full disclosure: I founded ScaNN in GCP databases and am the lead for AlloyDB Semantic Search. And all these opinions are my own.

    • speedysurfer a day ago ago

      And what if they change their internal implementation and your code depends on the old architecture? It's good practice to clearly think about what to expose to users of your service.

      • altcognito a day ago ago

        Knowing how the service will handle certain workloads is an important aspect of choosing an architecture.

      • libraryofbabel a day ago ago

        If you can truly abstract away an internal detail, then great. But often there are design decisions that you cannot abstract away because they affect e.g. performance in a major way. For example, I don't care whether some AWS service is written in Java or Go or C++. I do care a bit about how its indexing and retrieval works, because I need to know that to plan my query workloads.

        I actually think AWS did a reasonably good job of this with DynamoDB. Most of the performance tradeoffs, indexing, etc. are pretty clear if you read enough docs, without exposing a ton of unnecessary internals.

    • tw04 17 hours ago ago

      Detailed documentation would allow for a fair comparison of competing products. Opaque documentation allows AWS to sell "business value" to upper management while proclaiming anyone asking for more detail isn't focused on what's important.

    • apwell23 10 hours ago ago

      That would increase the surface area of the abstraction they are trying to expose. This is not a case of failure to document.

      One should only "poke around" an abstraction like this for fun and curiosity and not with intention of putting the finding to real use.

  • redskyluan a day ago ago

    Author of this article.

    Yes, I’m the founder and maintainer of the Milvus project, and also a big fan of many AWS projects, including S3, Lambda, and Aurora. Personally, I don’t consider S3Vector to be among the best products in the S3 ecosystem, though I was impressed by its excellent latency control. It’s not particularly fast, nor is it feature-rich, but it seems to embody S3’s design philosophy: being “good enough” for certain scenarios.

    In contrast, the products I’ve built usually push for extreme scalability and high performance. Beyond Milvus, I’ve also been deeply involved in the development of HBase and Oracle products. I hope more people will dive into the underlying implementation of S3Vector—this kind of discussion could greatly benefit both the search and storage communities and accelerate their growth.

    • redskyluan a day ago ago

      By the way, if you’re not fully satisfied with S3Vector’s write, query, or recall performance, I’d encourage you to take a look at what we’ve built with Zilliz Cloud. It may not always be the lowest-cost option, but it will definitely meet your expectations when it comes to latency and recall.

    • pradn a day ago ago

      Thanks for writing a balanced article - much easier to take your arguments seriously! And a sign of expertise.

    • Shakahs a day ago ago

      While your technical analysis is excellent, making judgements about workload suitability based on a Preview release is premature. Preview services have historically had significantly lower performance quotas than GA releases. Lambda for example was limited to 50 concurrent executions during Preview, raised to 100 at GA, and now the default limit is 1,000.

  • jhhh 17 hours ago ago

    "That gap isn’t just theoretical—it shows up in real bills."

    "That’s not linear growth—it’s a quantum leap"

    "The performance and recall were fantastic—but the costs were brutal"

    "it’s not a one-size-fits-all solution—it’s the right tool for the right job."

    "S3 Vectors is excellent for cold, cheap, low-QPS scenarios—but it’s not the engine you want to power a recommendation system"

    "S3 Vectors doesn’t spell the end of vector databases—it confirms something many of us have been seeing for a while"

    "that’s proof positive that vector storage is a real necessity—not just “indexes wrapped in a database."

    "the vector database market isn’t being disrupted—it’s maturing into a tiered ecosystem where different solutions serve different performance and cost needs"

    "The golden age of vector databases isn’t over—it’s just beginning."

    "The bigger point is that Milvus is evolving into a system that’s not only efficient and scalable, but AI-native at its core—purpose-built for how modern applications actually work."

  • qaq a day ago ago

    "I recently spoke with the CTO of a popular AI note-taking app who told me something surprising: they spend twice as much on vector search as they do on OpenAI API calls. Think about that for a second. Running the retrieval layer costs them more than paying for the LLM itself. That flips the usual assumption on its head." Hmm well start sending full documents as part of context see it flip back :).

    • heywoods a day ago ago

      Egress costs? I’m really surprised by this. Thanks for sharing.

      • qaq a day ago ago

        Sry, maybe I should've been more clear that it was a sarcastic remark. The whole point of doing vector db search is to feed the LLM very targeted context so you can save $ on API calls to the LLM.

        • infecto a day ago ago

          That’s not the whole point; it's the intersection of reducing the tokens sent and getting search that's both specific and general enough to capture the correct context data.

          • j45 a day ago ago

            It's possible to create linking documents between the documents to help smooth out things in some cases.

      • andreasgl 16 hours ago ago

        They’re likely using an HNSW index, which typically requires a lot of memory for large data sets.

    • dahcryn 7 hours ago ago

      if they use AzureSearch, I fully understand it. Those things are hella expensive

  • physicsguy 10 hours ago ago

    The biggest killer of vector dbs is that normal DBs can easily store embeddings, and the vector DBs just don’t then offer enough of a differentiator to be a separate product.

    We found our application was very sensitive to context-aware chunking too. You don’t really get control of that in many tools.

  • scosman a day ago ago

    Anyone interested in this space should look at https://turbopuffer.com - I think they were first to market with S3 backed vector storage, and a good memory cache in front of it.

    • k9294 10 hours ago ago

      Turbopuffer is awesome, really recommend it. They also have extra features like automatic recall tuning based on your data, the option to choose read-after-write guarantees (trading latency for consistency or vice versa), BM25 search, filtering on fields, and many more.

      Really recommend checking them out if you need a vector DB. I tried Qdrant and Zilliz Cloud, and in terms of operational simplicity turbopuffer is just killing it.

      https://turbopuffer.com/docs/query

    • nosequel a day ago ago

      Turbopuffer was mentioned in the article.

  • iknownothow 10 hours ago ago

    S3 has much bigger fish in its sights than the measly vector db space. If you look at the subtle improvements in S3's features in recent years, it is clear as day, at least to me, that they're going after the whale that is Databricks. And they're doing it the best way possible - slowly and silently eating away at their moat.

    AWS Athena hasn't received as much love for some reason. In the next two years I expect major updates and/or improvements. They should kill off Redshift.

    • antonvs 9 hours ago ago

      > … going after the whale that is Databricks.

      Databricks is tiny compared to AWS, maybe 1/50th the revenue. But they’re both chasing a big and fast-growing market. I don’t think it’s so much that AWS is going after Databricks as that Databricks happens to be in a market that AWS is interested in.

      • iknownothow 7 hours ago ago

        I agree, Databricks is one of many in the space. If S3 makes Databricks redundant, then it also makes others like Databricks redundant.

  • cpursley a day ago ago

    Postgres has pgvector. Postgres is where all of my data already lives. It’s all open source and runs anywhere. What am I missing with the specialty vector stores?

    • CuriouslyC a day ago ago

      latency, actual retrieval performance, integrated pipelines that do more than just vector search to produce better results, the list goes on.

      Postgres for vector search is fine for toy products or stuff that's outside the hot loop of your business but for high performance applications it's just inadequate.

      • cpursley a day ago ago

        For the vast majority of applications, the trade-off of keeping everything in Postgres is worth it vs the operational overhead of some VC-hype data store that won’t be around in 5 years. Most people learned this lesson with Mongo (Postgres jsonb is now good enough for 90% of scenarios).

        • CuriouslyC a day ago ago

          I'm a legit postgres fanboy, my comment history will back this up, but the ops overhead and performance implications of trying to run pgvector as your core vector store for everything are just silly; you're going to be doing all sorts of postgres replication gymnastics to make up for the fact that you're using the wrong tool for the job. It's good for prototyping and small/non-core workloads; use it outside that scope at your own peril.

          • alastairr a day ago ago

            Interested to hear any more on this. I've been using pinecone for ages, but they recently increased the cost floor for serverless. I've been thinking of moving everything to pgvector (1M ish, so not loads), as all the bigger meta data lives there anyway. But I'd be interested to hear any views on that.

            • CuriouslyC a day ago ago

              It depends on your flow honestly. If you're just using your vectors for WHERE filters on domain objects and you don't have hundreds of millions of vectors, PGVec is fine. If you have any sort of workflow where you need low-latency access to vectors and reliable random read performance, or where vector work is the bottleneck on performance, PGVec goes tits up.

            • whakim a day ago ago

              At 1M embeddings I'd think pgvector would do just fine assuming a sufficiently powerful database.

          • cpursley a day ago ago

            Guess I'm just not webscale™

          • j45 a day ago ago

            Appreciate the clarification. I have been using it for small / medium things and it's been OK.

            The "everything Postgres for as long as reasonably possible" approach is fun, but not something I expect to last forever.

        • whakim a day ago ago

          It depends on scale. If you're storing a small number of embeddings (hundreds of thousands, millions) and don't have complicated filters, then absolutely the convenience factor of pgvector will win out. Beyond that, you'll need something more powerful. I do think the dedicated vector stores serve a useful place in the market in that they're extremely "managed" - it is really really easy to just call an API and never worry about pre- or post- filtering or sharding your index across a large cluster. But they also have weaknesses in that they're usually optimized around small(er) scale where the bulk of their customers lie, and they don't really replace an actual search system like ElasticSearch.

        • cpursley a day ago ago

          Also, there's no way retrieval performance is going to match pgvector, because you still have to join the external vector results with your domain data in the main database at the application level, which is always going to be less performant.

          • jitl a day ago ago

            i'll take a 100ms turbopuffer vector search plus a 50ms postgres-select-where-id-in over a 500ms all-in-one pgvector + join query.

            When you only need to hydrate like 30 search result item IDs from Postgres or memcached, I don't see the join being "too expensive" to do in memory.
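
            The shape of that two-step flow, with a stand-in vector_search() since every store's client looks different:

              import psycopg

              def vector_search(query_embedding, k=30):
                  # stand-in for the vector store call (turbopuffer, Milvus, whatever);
                  # assumed to return the ids of the top-k nearest neighbours
                  ...

              def search(conn, query_embedding):
                  ids = vector_search(query_embedding, k=30)
                  # hydrate the ~30 hits from Postgres; the "join" is just an id list
                  rows = conn.execute(
                      "SELECT id, title, body FROM documents WHERE id = ANY(%s)",  # placeholder schema
                      (ids,),
                  ).fetchall()
                  by_id = {r[0]: r for r in rows}
                  return [by_id[i] for i in ids if i in by_id]  # keep the vector-search ranking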

          • CuriouslyC a day ago ago

            For a large class of applications, the database join is the last step of a very involved pipeline that demands a lot more performance than PGVector can deliver. There are also a large class of applications that don't even interface with the database directly, except to emit logging/traceability artifacts.

  • conradev a day ago ago

      At a glance, it looks like a lightweight vector database running on top of low-cost object storage—at a price point that is clearly attractive compared to many dedicated vector database solutions.
    
    They also didn’t mention LanceDB, which fits this description but with an open source component: https://lancedb.github.io/lancedb/

    • kjfarm a day ago ago

      This may be because LanceDB is the most attractive, with a price point of standard S3 storage ($0.023/GB vs $0.06/GB). I also like that LanceDB works with S3-compatible stores, such as Backblaze B2, which is even cheaper (~70% cheaper).

    • nickpadge a day ago ago

      I love lancedb. It’s the only way I’ve found to performantly and cheaply serve 50m+ records of 768 dimensions. Running on S3 is a bit too slow, but on EFS it can still be a few hundred millis.

    • factsaresacred 20 hours ago ago

      For low cost, there's also Cloudflare Vectorize ($0.05 per 100 million stored vectors), which nobody seems to know exists: https://www.cloudflare.com/developer-platform/products/vecto...

  • janalsncm a day ago ago

    S3 vectors has a topK limit of 30, and if you add filters it may be less than that. So if you need something with higher topK you’ll need to 1) look elsewhere or 2) shard your dataset into N shards to get NxK results, which you query in parallel and merge afterwards.
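
    The workaround looks roughly like this, with query_shard() standing in for a per-shard call (e.g. one S3 Vectors index per shard):

      from concurrent.futures import ThreadPoolExecutor

      def query_shard(shard, embedding, k):
          # stand-in for whatever per-shard query you use;
          # assumed to return a list of (key, distance) pairs
          ...

      def query_all(shards, embedding, k=20):
          with ThreadPoolExecutor(max_workers=len(shards)) as pool:
              partials = list(pool.map(lambda s: query_shard(s, embedding, k), shards))
          # merge N shard results of up to K each, keep the global top K by distance
          merged = sorted((hit for part in partials for hit in part), key=lambda h: h[1])
          return merged[:k]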

    I also didn’t see any latency info on their docs page https://docs.aws.amazon.com/AmazonS3/latest/API/API_S3Vector...

    • mediaman a day ago ago

      And a topk of 30 also means reranking of any sort is out, except for maybe limited reranking of 30->10, but that seems kind of pointless with today’s LLMs that can handle a bit more context.

      • janalsncm a day ago ago

        Yeah exactly, so you could do something like shard by the first 4 bits of md5 of the text (gives you 16 buckets) but now you’re adding extra complexity to work around their limitations.
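
        i.e. something like this (just a sketch of the bucketing, nothing S3 Vectors specific):

          import hashlib

          def shard_for(text: str) -> int:
              # first 4 bits of the md5 digest -> one of 16 buckets
              return hashlib.md5(text.encode("utf-8")).digest()[0] >> 4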

    • catlifeonmars 15 hours ago ago

      3) ask TAM for a service quota increase

  • softwaredoug 18 hours ago ago

    I’m not sure S3 vectors is a true vector database/search engine in the way something like Elasticsearch, Turbopuffer or Milvus is. It’s more a convenient building block for simple high scale retrieval.

    I think of a search system as doing quite a lot at its core: sparse/lexical/hybrid search, metadata filtering, numerical ranking (recency/popularity/etc), geo, fuzzy, and whatever other indices. These are the building blocks for getting initial candidates.

    Then you need to be able to combine all of these into one result set for your users - usually with a query DSL where you can express a ranking function. And there are usually ancillary features that come up as well (highlighting, aggregations, etc.).

    So while S3 vectors is a fascinating primitive, I’m not sure I’d reach for it outside specific circumstances.

  • turing_complete 13 hours ago ago

    Since when was everything no longer "announced" or "released", but "dropped"? Is this an LLMism?

  • storus a day ago ago

    Does this support hybrid search (dense + sparse embeddings)? Pure dense embeddings aren't that great for specific searches; they only reliably hit meaning. Amazon's own embeddings also aren't SOTA.

    • danielcampos93 a day ago ago

      I think you would be very surprised by the number of customers who don't care if the embeddings are SOTA. For every Joe who wants to talk GraphRAG + MTEB + CMTEB and adaptive rag there are 50 who just want whatever IT/prodsec has approved

    • infecto a day ago ago

      That’s where my mind was going as well - and if not, can this be used in OpenSearch hybrid search?

  • hbcondo714 a day ago ago

    It would be great to have the vector database run on the edge / on-device for offline-first and be privacy-focused. https://objectbox.io/ does this but i would like to see AWS and others offer this as well.

    • greenavocado a day ago ago

      I am already using Qdrant very heavily for code dev (RAG) and I don't see that changing any time soon, because it's the primary choice for the tools I use and it works well.

  • rubenvanwyk a day ago ago

    I don’t think it’s either-or; this will probably become the default / go-to if you aren’t storing your vectors in your db, like Neon or Turso.

    As far as I understand, Milvus is appropriate for very large scale, so it will probably continue targeting enterprise.

  • anonu 18 hours ago ago

    If you like to die in a slow and expensive way - sure.

  • resters a day ago ago

    By hosting the vectors themselves, AWS can meta-optimize its cloud based on content characteristics. It may not seem like a major optimization, but at AWS scale it is billions of dollars per year. It also makes it easier for AWS to comply with censorship requirements.

    • coredog64 a day ago ago

      This comment appears to misunderstand the control plane/data plane distinction of AWS. AWS does have limited access to your control plane, primarily for things like enabling your TAMs to analyze your costs or getting assistance from enterprise support teams. They absolutely do not have access to your dataplane unless you specifically grant it. The primary use case for the latter is allowing writes into your storage for things like ALB access logs to S3. If you were deep in a debug session with enterprise support they might request one-off access to something large in S3, but I would be surprised if that were to happen.

      • resters a day ago ago

        If that is the case why create a separate govcloud and HIPAA service?

        • thedougd a day ago ago

          HIPAA services are not separate. You only need to establish a Business Associate Addendum (BAA) with AWS and stick to HIPAA-eligible services: https://aws.amazon.com/compliance/hipaa-eligible-services-re...

          GovCloud exists so that AWS can sell to the US government and their contractors without impacting other customers who have different or less stringent requirements.

        • everfrustrated 12 hours ago ago

          Product segmentation. Certain customers self-select to pay more for the same thing.

    • barbazoo a day ago ago

      > It also makes it easier for AWS to comply with censorship requirements.

      Does it? How? Why would it be the vector store that makes it easier for them to censor the content? Why not censor the documents in S3 directly, or the entries in the relational database? What is different about censoring those vs a vector store?

      • resters a day ago ago

        Once a vector has been generated (and someone has paid for it) it can be searched for and relevant content can be identified without AWS incurring any additional cost to create its own separate censorship-oriented index, etc. AWS can also add additional bits to the vector that benefit its internal goals (scalability, censorship, etc.)

        Not to mention there is lock-in once you've gone to the trouble of using a specific embedding model on a bunch of content. Ideally we'd converge on backwards-compatible, open source approaches, but cloud vendors want to offer "value" by offering "better" embedding models that are not open source.

        • simonw a day ago ago

          Why would they do that? Doesn't sound like something that would attract further paying customers.

          Are there laws on the books that would force them to apply the technology in this way?

          • resters a day ago ago

            Not official laws that we can read, but things like that are already in place per the Snowden revelations.

        • whakim a day ago ago

          Regardless of the merits of this argument, dedicated vector databases are all running on top of AWS/GCP/Azure infrastructure anyways.

        • barbazoo a day ago ago

          And that doesn't apply to any other database/search technology AWS offers?

          • resters a day ago ago

            It does to some but not to most of it, which is why Azure and GCP offer nearly the exact same core services.

    • j45 a day ago ago

      Also, if it's not encrypted, I'm not sure whether AWS or others "synthesize" customer data with a cursory scrubbing of so-called client-identifying information, and then try to optimize and model for those scenarios at scale.

      I do feel more and more that some information in the corpus of AI models got there this way. A client's name and personally identifiable information might not be in the model, but some patterns of how to do things sure seem to come from such sources.

  • teaearlgraycold a day ago ago

    > Not too long ago, AWS dropped something new: S3 Vectors. It’s their first attempt at a vector storage solution

    Nitpick: AWS previously funded pgvector (the slowdown in development indicates to me that they have stopped). Their hosted database solutions supported the extension. That means RDS and Aurora were their first vector storage solutions.

  • j45 a day ago ago

    The cloud is someone else's computer.

    If it's this sensitive, there are a lot of companies staying on the sidelines until they can compute in person, or limiting what and how they use it.

  • giveita a day ago ago

    Betteridge can answer No to two questions at once!

  • curtisszmania 8 hours ago ago

    [dead]

  • Fendy a day ago ago

    what do you think?

    • sharemywin a day ago ago

      It's annoying to me that there's not a doc store with vectors. Seems like the vector dbs just store the vectors, I think.

      • simonw a day ago ago

        Elasticsearch and MongoDB Atlas and PostgreSQL and SQLite all have vector indexes these days.

        • KaoruAoiShiho a day ago ago

          > MongoDB Atlas

          It took a while, but eventually open source dies.

      • CuriouslyC a day ago ago

        My search service Lens returns exact spans from search, while having the best performance both in terms of latency and precision/recall within a budget. I'm just working on release cleanup and final benchmark validation so hopefully I can get it in your hands soon.

      • storus a day ago ago

        Pinecone allows 40 KB of metadata with each vector, which is often enough.

      • whakim a day ago ago

        Elasticsearch and Vespa both fit the bill for this, if your scale grows beyond the purpose-built vector stores.

      • jeffchuber a day ago ago

        chroma stores both

        • nkozyra a day ago ago

          As does Azure's AI search.

      • intalentive a day ago ago

        I just use sqlite