You haven't provided anything to back up your measurements, so here is a setup which proves the opposite:
MyContract.sol:
```solidity
pragma solidity 0.4.24;

contract MyContract {
    struct Obj {
        uint32 a;
        uint32 b;
        uint32 c;
    }

    mapping(string => Obj) internal objs;

    function set1(string objName, uint256 obja, uint256 objb, uint256 objc) external {
        Obj memory obj;
        obj.a = uint32(obja);
        obj.b = uint32(objb);
        obj.c = uint32(objc);
        objs[objName] = obj;
    }

    function set2(string objName, uint256 obja, uint256 objb, uint256 objc) external {
        objs[objName].a = uint32(obja);
        objs[objName].b = uint32(objb);
        objs[objName].c = uint32(objc);
    }
}
```
MyContractTest.js:
```javascript
contract("MyContractTest", function (accounts) {
    it("performance measurement:", async function () {
        let myContract = await artifacts.require("MyContract.sol").new();
        for (let a = 0; a < 10; a++) {
            for (let b = 0; b < 10; b++) {
                for (let c = 0; c < 10; c++) {
                    let set1gas = await myContract.set1.estimateGas("test", a, b, c);
                    let set2gas = await myContract.set2.estimateGas("test", a, b, c);
                    console.log(`set1gas = ${set1gas}, set2gas = ${set2gas}`);
                }
            }
        }
    });
});
```
You can run this via `truffle test MyContractTest.js` and observe that the first method is in fact around 10K gas cheaper than the second.
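Rather than eyeballing the console output, the per-call estimates could also be aggregated first. A minimal sketch in plain Node.js; the `meanGas` helper and the sample numbers below are made up for illustration, standing in for the values collected in the loop:

```javascript
// Hypothetical helper: average the estimateGas samples so that one-off
// effects (e.g. the very first write to a key) don't skew the comparison.
function meanGas(samples) {
  return samples.reduce((sum, g) => sum + g, 0) / samples.length;
}

const set1Samples = [63000, 54000, 54000]; // illustrative estimates for set1
const set2Samples = [73000, 64000, 64000]; // illustrative estimates for set2

console.log(`set1 mean = ${meanGas(set1Samples)}`); // set1 mean = 57000
console.log(`set2 mean = ${meanGas(set2Samples)}`); // set2 mean = 67000
```

Comparing means over many calls makes the roughly constant gap between the two methods stand out from the state-dependent noise discussed below.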
As mentioned in the first answer, you may have run into the same pitfall as when measuring execution speed without accounting for the current state of the cache, which shapes how the underlying platform behaves at runtime: the result reflects the state left behind by earlier runs as much as the code being measured.
In that sense, measuring gas consumption on the blockchain is similar to measuring time performance on "standard" machines.
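To make the analogy concrete, here is a simplified model of storage-write pricing, assuming the pre-Istanbul SSTORE costs (20K gas to set a zero slot to a nonzero value, 5K to overwrite a nonzero slot); the slot and loop are illustrative only:

```javascript
// Simplified (pre-Istanbul) SSTORE pricing: setting a zero slot to a nonzero
// value costs 20000 gas; overwriting an already-nonzero slot costs 5000.
function sstoreGas(oldValue, newValue) {
  return oldValue === 0 && newValue !== 0 ? 20000 : 5000;
}

// The first write to a fresh key pays the higher price; repeated writes are
// cheaper, much like a cold cache vs. a warm cache in timing benchmarks.
let slot = 0;
const costs = [];
for (const value of [1, 2, 3]) {
  costs.push(sstoreGas(slot, value));
  slot = value;
}
console.log(costs); // [ 20000, 5000, 5000 ]
```

This is why the very first `set1`/`set2` call against a given key is noticeably more expensive than the repeats, and why a measurement that ignores prior state can mislead.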
Note that `set1` declares `Obj memory obj`, i.e. it uses a memory struct, not a storage one, as the temporary variable (it could also copy from storage with `Obj memory obj = objs[objName]` earlier on). For a single `Obj` variable, the memory route is actually cheaper, but with an array of them it becomes more expensive; similarly with mappings.