Here's another question as I get into the use of memory tables. I haven't seen any mention of this. If we put a significant portion of our database into memory tables, is this going to increase our use of CPU?
From a few preliminary tests it looks like CPU usage can be noticeably higher when using memory-optimized tables compared to using regular tables whose data is already cached in memory.
So, say you have a 100 GB database, and you put 25% of it into memory-optimized tables, and those tables account for a lot of your database activity, or you'd hardly bother to put them in memory. Let's say they're normally associated with about 50% of CPU activity. If the CPU cost of using memory-optimized tables is even 50% higher, then total CPU utilization may go up 25%. I think that's worth a mention in the planning guides.
OTOH these are very rough estimates, with a strong assumption that the data is already in memory. It may be that even with your main data tables in memory, 75% of CPU activity is still related either to tempdb or to system processes. In that case a 25% increase in *user* activity would be a smaller net increase in overall CPU usage.
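The back-of-envelope arithmetic above can be sketched like this (all the percentages are my assumed inputs, not measurements from a real workload):

```python
# Rough estimate: if a given share of total CPU is attributable to the
# tables being moved into memory, and the per-operation CPU cost of those
# tables rises by some factor, the net increase in overall CPU usage is
# simply share * relative_increase.

def net_cpu_increase(share_of_cpu: float, relative_increase: float) -> float:
    """Fraction by which total CPU utilization rises (both inputs 0..1)."""
    return share_of_cpu * relative_increase

# Scenario above: tables account for 50% of CPU, and their cost rises 50%.
print(net_cpu_increase(0.50, 0.50))  # 0.25 -> total CPU up ~25%

# If tempdb/system work dominates and the migrated tables are only 25% of
# CPU activity, the same 50% bump yields a smaller net increase.
print(net_cpu_increase(0.25, 0.50))  # 0.125 -> total CPU up ~12.5%
```

Obviously a real measurement would have to separate out compilation, logging, and garbage-collection overheads, but as a planning sanity check this is the shape of the calculation.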
Of course I'd love to hear from anyone who has actually done this. I don't seem to have a server handy where I can just load it all up and see what there is to see. Have to work on that, I guess.
Josh
ps - which immediately leads one to ask about putting tempdb into memory tables but surely that's another topic. :)
pps: yes, memory table vars!
https://msdn.microsoft.com/en-us/library/mt718711.aspx
Not a full answer, but a good step forward.