Cache lockdown is almost never a good thing for performance in general-purpose application code. Caches are nearly always smaller than the total code/data set on a platform, so if you lock down 25% of the cache to accelerate one application, you effectively make the cache 25% smaller for everything else. You may make that one application faster (possibly, if it happens to have a small data set, although even this is questionable for most applications, which are bigger than the L2 size), but at the expense of making everything else running on the platform slower. For this reason cache lockdown is nearly always "the wrong thing", even where it is available. For most software development there is unfortunately no quick fix to making software run faster; you just need to profile and optimize your application hotspots to use better algorithms, cleaner code, less memory, etc.
The only real use case for cache lockdown is in critical sections of hard-realtime systems, where guaranteed performance of small code sections (interrupt handlers and the like) is required, and the overall loss of cache (and the drop in performance for everything else) is viewed as an acceptable sacrifice to achieve that predictable response time. It is also worth noting that in many markets needing realtime response, TCM (tightly coupled memory) is generally available as a synthesis option in the Cortex-R family, so even in those markets there are better alternatives to cache lockdown, with better area efficiency.
In summary - cache lockdown generally makes your platform slower (due to the smaller average cache size remaining after lockdown), but buys predictable execution time for critical realtime sections. It is not, and never has been, an optimization technique to make application code run faster.