
I'm seeing a fair frequency of :memcached/item-too-large statements in my Datomic logs.

Is this a size limitation internal to Datomic? If so, could it mean our system is missing out on some cached segments?

A recent datomic.process-monitor log statement is showing:

{:tid 23,
 :CacheRepair {:lo 1, :count 637, :sum 637, :hi 1},
 :ObjectCacheCount 16634,
 :PeerAcceptNewMsec {:lo 0, :count 348, :sum 1, :hi 1},
 :MemcachedPutSucceededMsec {:lo 0, :count 602, :sum 1276, :hi 177},
 :AvailableMB 5550.0,
 :Memcache {:lo 0, :count 12175, :sum 11537, :hi 1},
 :StorageGetMsec {:lo 2, :count 637, :sum 10089, :hi 831},
 :MemcacheItemTooLarge {:lo 1, :count 4, :sum 4, :hi 1},
 :pid 18,
 :event :metrics,
 :ObjectCache {:lo 0, :count 702668, :sum 690336, :hi 1},
 :MetricsReport {:lo 1, :count 1, :sum 1, :hi 1},
 :PeerFulltextBatch {:lo 1, :count 332, :sum 348, :hi 6},
 :DbAddFulltextMsec {:lo 0, :count 21, :sum 30, :hi 12},
 :MemcachedPutFailedMsec {:lo 0, :count 31, :sum 541, :hi 52},
 :StorageGetBytes {:lo 122, :count 637, :sum 92206399, :hi 31294091}}

That would seem to suggest a high hit rate, judging from the ratio of :Memcache :sum to :count.
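
For reference, here is that ratio computed from the map above (a quick sketch of my own, assuming :sum counts hits and :count counts lookups in the reporting window):

(let [{:keys [sum count]} {:lo 0, :count 12175, :sum 11537, :hi 1}]
  (/ sum (double count)))
;; => ~0.948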

Is this something I can fix, or something that could cause issues?

Thanks!!

1 Answer


MemcacheItemTooLarge represents a segment that is too big to fit in memcached; the limit for this is (I think) 1 MB. Segments are made up of many datoms and can become large when you're storing blobs. They can also grow large from frequently updated string values that share the same leading sections, and another common cause is large (in number of datoms) transactions: Datomic does not split an individual transaction across segment boundaries.
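
For the large-transaction case, a rough sketch of a workaround (my own illustration, not an official Datomic helper; the batch size is an arbitrary example) is to split a big import into several smaller transactions:

(require '[datomic.api :as d])

(defn transact-in-batches
  "Hypothetical helper: submits tx-data as several smaller transactions
  so no single transaction yields an oversized segment."
  [conn tx-data batch-size]
  (doseq [batch (partition-all batch-size tx-data)]
    @(d/transact conn batch)))

;; usage (illustrative): (transact-in-batches conn import-data 1000)

Note that this trades one atomic transaction for several, so it only applies when the datoms don't all need to commit together.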

Having the occasional item too large to fit into memcached can be expected, but I would recommend investigating to make sure that:

  1. your hit rate is not dropping too low (something you've already reviewed; see the monitoring sketch after this list)
  2. you understand which particular flavor of item-too-large you have and whether you can address it
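
For (1), if you want an automated check rather than reading the logs, Datomic On-Prem lets you supply a custom metrics callback (the datomic.metricsCallback system property on peers, per the monitoring docs). A rough sketch, with an illustrative 0.95 threshold and a hypothetical namespace:

(ns my.metrics)

(defn callback
  "Receives each Datomic metrics map; warns on the two conditions
  discussed above. Thresholds are illustrative, not recommendations."
  [metrics]
  (when-let [{:keys [sum count]} (:Memcache metrics)]
    (when (< (/ sum (double count)) 0.95)
      (println "WARN: memcached hit rate below 0.95")))
  (when-let [{:keys [count]} (:MemcacheItemTooLarge metrics)]
    (println "WARN:" count "memcached item(s) too large")))

;; peer JVM arg (illustrative): -Ddatomic.metricsCallback=my.metrics/callback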

You can investigate further by looking at which particular segment was too large to fit into memcached, then addressing what you're storing or how large your transactions are.

We'd be happy to walk you through the process of investigating a segment via our support portal. Please feel free to log a ticket here and we can describe the process:

https://support.cognitect.com/hc/en-us/requests/new

Documentation on Memcached

Jaret,

Thanks for the answer! Luckily it seems our memcached hit ratio is pretty close to 1.0, so that's good. We do have some blobs being stored as string values, which could explain the problem.
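
If the blobs turn out to be the cause, one option we're weighing (just a sketch; the attribute name is made up) is keeping each blob in external storage and storing only its key in Datomic, so no segment has to carry the blob bytes:

[{:db/ident       :document/blob-key
  :db/valueType   :db.type/string
  :db/cardinality :db.cardinality/one
  :db/doc         "Key of the blob in external storage; the blob itself stays out of Datomic"}]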

I'll reach out about investigating the segment itself, as this would help us understand next steps.