eh
I have tried rewriting this to use local variables, OPTIMIZE FOR UNKNOWN/variable/etc., WITH RECOMPILE.
The first two methods listed here are pretty gosh-heckin' useless for improving performance most of the time. You may run into a situation every 5-10 years where one of them fixes one specific query, but I'd keep them pretty low on my list of things to try.
In the slow plan, the part that sucks up the most time looks like this:

You've apparently got a computed column on the Catalog_Items table that generates the MD5 hash, and it's not persisted. You don't have to persist it, but the reason you end up with a Filter operator on an expression, and can't apply the predicate while scanning the table, is that the hash has to be calculated per-row at runtime.
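If you did want to persist it, it would look something like this. This is just a sketch: the source column name (`CategoryCode`) and the exact expression are guesses, since the actual computed column definition wasn't posted, and you'd need to drop and re-add the column (and rebuild any indexes on it) to change it.

```sql
/* Assumed names: CategoryCode as the hashed source column.
   HASHBYTES('MD5', ...) returns 16 bytes, matching binary(16). */
ALTER TABLE dbo.Catalog_Items
    DROP COLUMN CategoryCodeHash;

ALTER TABLE dbo.Catalog_Items
    ADD CategoryCodeHash
        AS CONVERT(binary(16), HASHBYTES('MD5', CategoryCode))
        PERSISTED;
```

A persisted computed column is stored on disk and maintained on write, so predicates against it don't need per-row computation at query time.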
In the query that's fast, you've got CategoryCodeHash in the index [IX_Catalog_Items_IdRetailer_CategoryCodeHash], which does persist the value and makes it seekable.
As to why you got such a bad estimate in the slow plan, it's unclear if the @CategoryHash argument is a parameter or local variable in this context, but...
```xml
<ParameterList>
  <ColumnReference Column="@TransactionDescriptionId" ParameterDataType="varchar(32)"
                   ParameterCompiledValue="'Instacart'" ParameterRuntimeValue="'Instacart'" />
  <ColumnReference Column="@CategoryHash" ParameterDataType="binary(16)"
                   ParameterRuntimeValue="0xD1DC3A8270A6F9394F88847D7F0050CF" />
</ParameterList>
```
The fact that @TransactionDescriptionId has the ParameterCompiledValue attribute and @CategoryHash does not seems to indicate that @CategoryHash wasn't a sniffed parameter. In the fast plan, neither has the ParameterCompiledValue attribute, but without a full repro I don't know exactly what you did differently.
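If you want to check this yourself, you can pull cached plans for the procedure and look at the ParameterList element inside the plan XML. A sketch, with the procedure name as a placeholder:

```sql
/* Fetch cached plan XML for a given module; dbo.YourProcedureName is a placeholder.
   Open the query_plan column and search for <ParameterList>. */
SELECT
    qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.objectid = OBJECT_ID(N'dbo.YourProcedureName');
```

Parameters that were sniffed at compile time carry ParameterCompiledValue; local variables and parameters that weren't sniffed only show ParameterRuntimeValue (and only in actual plans).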
anyway
SQL Server's cost-based optimizer either found the seek + lookup more expensive than the clustered index scan + filter, or never arrived at the seek + lookup plan. Many people will tell you to "make the index covering" so that all of the columns you need would be available in the [IX_Catalog_Items_IdRetailer_CategoryCodeHash] index. That's an option, of course, but you can also do things to trick the optimizer into using narrow indexes, like using a self-join.
```sql
SELECT TOP (1)
    ci2.IdItem, ci2.IdCatalog, ci2.IdRetailer, ci2.LastUpdatedFromIdFile,
    ci2.Upc11, ci2.Upc12, ci2.Gtin, ci2.[Description], ci2.ShortDescription,
    ci2.CategoryName, ci2.SubCategoryName, ci2.FineLineCategoryName,
    ci2.SubCategoryCode, ci2.Manufacturer
FROM
(
    SELECT
        DelegateMerchantDescription =
            ISNULL(delegateRetailers.TransactionDescriptionId, retailers.TransactionDescriptionId),
        retailers.TransactionDescriptionId,
        retailers.MerchantId
    FROM Retailers AS retailers (NOLOCK)
    LEFT JOIN Retailers AS delegateRetailers (NOLOCK)
        ON delegateRetailers.TransactionDescriptionId = retailers.DelegateCatalogMerchantDescription
    WHERE retailers.TransactionDescriptionId IS NOT NULL
) AS delegate
JOIN Retailers AS r
    ON r.TransactionDescriptionId = delegate.DelegateMerchantDescription
JOIN Catalog_Items AS ci WITH (NOLOCK)
    ON ci.IdRetailer = r.IdRetailer
/*This is new*/
JOIN Catalog_Items AS ci2 WITH (NOLOCK)
    /*I'm not sure what the PK of Catalog_Items is*/
    ON ci2.primary_key_column = ci.primary_key_column
WHERE ci.CategoryCodeHash = @CategoryHash
AND
(
       delegate.TransactionDescriptionId = @TransactionDescriptionId
    OR delegate.MerchantId = @TransactionDescriptionId
);
```
The idea is to join the table to itself on primary key values, with one reference to the table responsible for selecting all the columns you want to display, and the other reference responsible for the joining, filtering, and any other relational activities.
contextually
One thing that may be worth looking at as to why the plan is different would be various SET options. Some are required for indexed views, computed columns, and filtered indexes to be evaluated during optimization:
The NUMERIC_ROUNDABORT option must be set to OFF, and the following options must be set to ON:
- ANSI_NULLS
- ANSI_PADDING
- ANSI_WARNINGS
- ARITHABORT
- CONCAT_NULL_YIELDS_NULL
- QUOTED_IDENTIFIER
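You can check what your current session has for each of these with SESSIONPROPERTY, which returns 1 for ON and 0 for OFF:

```sql
/* Check the current session's settings for the options that matter
   to computed column, indexed view, and filtered index matching. */
SELECT
    ansi_nulls              = SESSIONPROPERTY('ANSI_NULLS'),
    ansi_padding            = SESSIONPROPERTY('ANSI_PADDING'),
    ansi_warnings           = SESSIONPROPERTY('ANSI_WARNINGS'),
    arithabort              = SESSIONPROPERTY('ARITHABORT'),
    concat_null_yields_null = SESSIONPROPERTY('CONCAT_NULL_YIELDS_NULL'),
    quoted_identifier       = SESSIONPROPERTY('QUOTED_IDENTIFIER'),
    numeric_roundabort      = SESSIONPROPERTY('NUMERIC_ROUNDABORT');
```

Running this from each client that produces a different plan (SSMS vs. the application connection, say) is a quick way to spot a mismatch, since different drivers ship with different defaults.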
The DatabaseContextSettingsId is different for the two plans:

vs.

You can see a couple of the settings here:
```sql
SELECT
    procedure_name = p.name,
    sm.uses_ansi_nulls,
    sm.uses_quoted_identifier
FROM sys.procedures AS p
JOIN sys.sql_modules AS sm
    ON p.object_id = sm.object_id
WHERE p.name = N'YourProcedureName';
```
You can also see if the database has any defaults that go against the requirements for all modules created with no specification otherwise:
```sql
SELECT
    database_name = name,
    is_ansi_null_default_on,
    is_ansi_nulls_on,
    is_ansi_padding_on,
    is_ansi_warnings_on,
    is_arithabort_on,
    is_concat_null_yields_null_on,
    is_numeric_roundabort_on,
    is_quoted_identifier_on,
    is_recursive_triggers_on,
    is_cursor_close_on_commit_on,
    is_local_cursor_default
FROM sys.databases
WHERE name = N'YourDatabaseName';
```
Depending on what you see, you may want to try re-creating the stored procedure with the following:
```sql
SET ANSI_NULLS ON;
SET ANSI_PADDING ON;
SET ANSI_WARNINGS ON;
SET ARITHABORT ON;
SET CONCAT_NULL_YIELDS_NULL ON;
SET QUOTED_IDENTIFIER ON;
SET NUMERIC_ROUNDABORT OFF;
```
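One detail worth knowing: ANSI_NULLS and QUOTED_IDENTIFIER are captured at module creation time, so those two need to run in the batch before the CREATE, not inside the procedure body. A sketch of the shape, with the procedure name and parameters as placeholders:

```sql
/* These two stick to the module at creation time. */
SET ANSI_NULLS ON;
SET QUOTED_IDENTIFIER ON;
GO

CREATE OR ALTER PROCEDURE dbo.YourProcedureName
    @TransactionDescriptionId varchar(32),
    @CategoryHash binary(16)
AS
BEGIN
    SET NOCOUNT ON;
    /* ...procedure body... */
END;
GO
```

The remaining options (ARITHABORT and friends) come from the session that executes the procedure, which is why the same procedure can get different plans from SSMS and from your application.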
some other stuff
For your general consideration, though not necessarily a big deal for this specific query:
- NOLOCK stinks and can return incorrect/inconsistent data
- TOP without an ORDER BY is an iffy proposition
- @Table Variables force modifications to run single-threaded, and are not usually so hot when you start joining them off to other tables, as they don't get column-level statistics even when indexed
Note the CouldNotGenerateValidParallelPlan attribute of the INSERT operator. In this case, though, both plan costs are so low that they're unlikely to get a parallel execution plan anyway.