each_with_object vs reduce — Choosing the Right Ruby Aggregator
Ruby gives you multiple ways to aggregate a collection into a single result: reduce (aliased as inject), each_with_object, and occasionally tally, sum, or group_by for specific cases. Developers tend to reach for reduce out of habit or familiarity, but each_with_object is often the cleaner choice — and understanding why helps you pick the right tool quickly instead of translating mentally between the two every time.
The Core Difference
Both methods iterate a collection and build up a result. The difference is in how the accumulator works.
reduce passes the accumulator into each block iteration, and the block’s return value becomes the new accumulator. The accumulator itself gets replaced on each step.
each_with_object passes a single object that persists across all iterations. You mutate it directly in the block; the block’s return value is ignored. The original object is returned at the end.
This distinction is subtle but consequential.
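A minimal sketch of the mechanics, using `equal?` to show that `each_with_object` hands back the very same object it was given:

```ruby
# reduce: each step's return value replaces the accumulator entirely
digits = [1, 2].reduce("") { |acc, n| acc + n.to_s }
# => "12" — a fresh string was created on every iteration

# each_with_object: one object persists and is returned at the end
seed = []
built = [1, 2].each_with_object(seed) { |n, acc| acc << n }
built.equal?(seed)
# => true — the very same array object, mutated in place
```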
reduce — Best for Transforming into a New Value
reduce is ideal when you’re computing something where each iteration produces a new value: a sum, a product, a string built by concatenation, a new hash constructed from scratch.
Example:
# Sum of squares
[1, 2, 3, 4, 5].reduce(0) { |sum, n| sum + n**2 }
# => 55
# Flatten with reduce (illustration only — use flatten in practice)
[[1, 2], [3, 4], [5]].reduce([]) { |acc, arr| acc + arr }
# => [1, 2, 3, 4, 5]
# Build a lookup hash from an array
users = [{ id: 1, name: "Alice" }, { id: 2, name: "Bob" }]
users.reduce({}) { |hash, u| hash.merge(u[:id] => u[:name]) }
# => { 1 => "Alice", 2 => "Bob" }
The hash.merge(...) pattern is worth scrutinizing: each iteration creates a new hash object. For small collections this is fine. For large ones, merge in a reduce loop allocates many intermediate hashes.
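If you want to keep the reduce shape without the per-step allocation, Hash#update (an alias of merge!) mutates the receiver and returns it, so it doubles as the next accumulator:

```ruby
users = [{ id: 1, name: "Alice" }, { id: 2, name: "Bob" }]

# update mutates hash in place and returns it — one hash for the whole loop
users.reduce({}) { |hash, u| hash.update(u[:id] => u[:name]) }
# => { 1 => "Alice", 2 => "Bob" }
```

At that point, though, you are mutating an accumulator inside reduce, which is exactly the job each_with_object was designed for.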
each_with_object — Best for Building Into a Mutable Structure
each_with_object shines when you’re accumulating into a mutable structure — a hash, an array, a set — that you’re building up by mutation:
Example:
# Build the same lookup hash — more efficient
users = [{ id: 1, name: "Alice" }, { id: 2, name: "Bob" }]
users.each_with_object({}) { |u, hash| hash[u[:id]] = u[:name] }
# => { 1 => "Alice", 2 => "Bob" }
# Group by computed attribute
words = ["apple", "ant", "banana", "bear", "cherry"]
words.each_with_object(Hash.new { |h, k| h[k] = [] }) do |word, groups|
  groups[word[0]] << word
end
# => { "a" => ["apple", "ant"], "b" => ["banana", "bear"], "c" => ["cherry"] }
# Collect transformed values while skipping nils
data = [1, nil, 2, nil, 3]
data.each_with_object([]) do |item, result|
  result << item * 2 if item
end
# => [2, 4, 6]
No intermediate object allocations. The same hash or array is mutated on each iteration. For large collections or frequent calls, this matters.
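The same pattern extends to any mutable accumulator. A sketch with a Set, which absorbs duplicates for free:

```ruby
require "set"

nums = [1, 2, 2, 3, 4, 4, 4]
# One Set instance is mutated throughout the iteration
evens = nums.each_with_object(Set.new) { |n, set| set << n if n.even? }
# => #<Set: {2, 4}>
```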
The block argument order
One gotcha: each_with_object passes (element, accumulator), while reduce passes (accumulator, element). The accumulator is second in each_with_object, first in reduce. This trips people up when switching between the two.
Example:
# reduce: accumulator first
[1,2,3].reduce(0) { |acc, n| acc + n }
# each_with_object: accumulator second
[1,2,3].each_with_object([]) { |n, acc| acc << n * 2 }
When the Block Return Value Matters
This is where the choice becomes critical. In reduce, the block return value becomes the next accumulator. Forgetting this causes silent bugs:
Example:
# OK — acc + n is the last expression, so it becomes the new accumulator
[1, 2, 3].reduce(0) do |acc, n|
  puts "processing #{n}"  # returns nil, but it is not the last expression
  acc + n                 # last expression: this is the block's return value
end
# => 6
# Danger zone — a puts AFTER the accumulator expression returns nil
[1, 2, 3].reduce(0) do |acc, n|
  result = acc + n
  puts "running total: #{result}"  # returns nil — if this were last, acc would become nil
  result                           # must explicitly return the accumulator
end
# => 6
With each_with_object, the block return value is irrelevant — you can have puts, conditional statements, or anything as the last expression without breaking accumulation. The accumulator object is mutated directly and returned by the method regardless.
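The same trailing log line that endangers reduce is harmless here:

```ruby
# Safe — the trailing puts returns nil, but each_with_object ignores it
totals = [1, 2, 3].each_with_object([]) do |n, acc|
  acc << n * 10
  puts "added #{n * 10}"  # block return value is irrelevant
end
# => [10, 20, 30]
```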
Choosing Between Them
| Situation | Prefer |
|---|---|
| Computing a number (sum, product, count) | reduce |
| Building a new immutable value per step | reduce |
| Mutating a hash being built up | each_with_object |
| Mutating an array being collected | each_with_object |
| Block has side effects alongside accumulation | each_with_object |
| Performance with large collections | each_with_object (avoids intermediate allocs) |
Pro-Tip: For specific aggregations, Ruby’s specialized methods beat both:
tally counts occurrences, group_by groups by a key, sum adds numbers, flat_map maps and flattens. Before reaching for reduce or each_with_object, check if a purpose-built Enumerable method does exactly what you need — it’ll be more readable and often faster.
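Each of those specialized methods does in one call what a hand-rolled aggregator would:

```ruby
words = %w[apple ant banana bear]

words.map { |w| w[0] }.tally      # => { "a" => 2, "b" => 2 }
words.group_by { |w| w[0] }       # => { "a" => ["apple", "ant"], "b" => ["banana", "bear"] }
[1, 2, 3].sum                     # => 6
[[1, 2], [3]].flat_map { |a| a }  # => [1, 2, 3]
```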
Practical Example: Invoice Line Item Summary
Example:
line_items = [
  { product: "Widget A", qty: 2, price: 19.99 },
  { product: "Widget B", qty: 1, price: 49.99 },
  { product: "Widget A", qty: 3, price: 19.99 }
]
# each_with_object: build grouped totals
summary = line_items.each_with_object(Hash.new(0)) do |item, totals|
  totals[item[:product]] += item[:qty] * item[:price]
end
# => { "Widget A" => 99.95, "Widget B" => 49.99 }
# reduce: compute total value
total = line_items.reduce(0) { |sum, item| sum + item[:qty] * item[:price] }
# => 149.94
Both in the same snippet, doing what each does best: each_with_object for the mutable hash, reduce for the scalar sum.
Conclusion
reduce and each_with_object are not interchangeable — they’re complementary. reduce is clean for transforming into a new value; each_with_object is clean for building into a mutable structure. The block argument order and block return value behavior are the practical differences that matter most day-to-day. Once you internalize the pattern — mutation goes to each_with_object, transformation goes to reduce — choosing between them becomes automatic rather than a reasoning exercise every time.
FAQs
Q1: Is inject the same as reduce?
Yes, exactly. inject is the original Ruby name; reduce was added as an alias to match conventions in other languages. They’re identical. Most style guides prefer reduce for clarity.
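Both names also accept the symbol shorthand for simple operator folds:

```ruby
[1, 2, 3].inject(:+)     # => 6
[1, 2, 3].reduce(:+)     # => 6 — identical behavior under either name
[1, 2, 3].reduce(10, :+) # => 16 — initial value, then the fold
```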
Q2: Can I use each_with_object with an immutable accumulator like a frozen string?
No — each_with_object works by mutating the accumulator object. Frozen objects can’t be mutated. Use reduce when you need to build up an immutable value through replacement rather than mutation.
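The failure mode is easy to see directly (FrozenError applies on Ruby 2.5+):

```ruby
# Mutating a frozen accumulator raises at the first <<
begin
  %w[a b c].each_with_object("".freeze) { |s, acc| acc << s }
rescue FrozenError
  # can't modify frozen String
end

# reduce replaces the accumulator each step, so frozen seeds are fine
%w[a b c].reduce("") { |acc, s| acc + s }
# => "abc"
```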
Q3: Is there a performance difference between the two?
For mutable structures (hashes, arrays), each_with_object is faster because it doesn’t allocate a new object on each iteration. For scalar values (numbers, strings built by concatenation), reduce is the natural fit and there’s no meaningful performance difference.
Q4: What’s the difference between reduce with an initial value and without one?
Without an initial value, reduce uses the first element as the accumulator and starts iterating from the second. This is fine for simple sums, but an empty collection returns nil (which often surfaces later as a confusing NoMethodError), and the first element is used as the seed rather than processed like the others. Always provide an explicit initial value for clarity and safety.
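The empty-collection and seed behaviors are worth seeing directly:

```ruby
[].reduce { |acc, n| acc + n }    # => nil — no elements, no accumulator
[].reduce(0) { |acc, n| acc + n } # => 0 — explicit seed keeps the type stable

# Without a seed, the first element IS the seed, not a processed element
[2, 3].reduce { |acc, n| acc * n } # => 6 (2 becomes acc, then 2 * 3)
```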
Q5: When should I use tally instead of each_with_object?
When you’re counting occurrences of elements. [1,1,2,3,3,3].tally → {1=>2, 2=>1, 3=>3}. It’s more expressive than each_with_object(Hash.new(0)) { |n, h| h[n] += 1 } and makes intent immediately clear.