I have a fairly large dataset (~50K entries) which I use to generate a correlation matrix. This works well, using "only" ~20GB RAM.
Then, I want to extract only the unique pairwise combinations from it and convert them into a data frame. This is where I run into issues: either the RAM usage blows up or the indexing variable(s) overflow. I know there are >2B combinations, so I am aware it explodes a bit in size, but still…
I have tried different ways to achieve this, but with no success.
Mock data:
df = matrix(runif(1), nrow = 50000, ncol = 50000,
            dimnames = list(seq(1, 50000, by = 1), seq(1, 50000, by = 1)))
Trying to extract upper/lower triangle from the correlation matrix and then reshape it:
df[lower.tri(df, diag = T),] = NA
df = reshape2::melt(df, na.rm = T)
crashes with:
Error in df[lower.tri(bla, diag = T), ] = NA :
long vectors not supported yet: ../../src/include/Rinlinedfuns.h:522
It crashes with the same error if you do only: df = df[lower.tri(df, diag = T),]
(I did read through "Large Matrices in R: long vectors not supported yet", but I didn't find it helpful for my situation.)
I also tried:
df = subset(as.data.frame(as.table(df)),
            match(Var1, names(annotation_table)) > match(Var2, names(annotation_table)))
to use only base-R functions, but it eventually ran out of memory after about a day. The most RAM-intensive part is as.data.frame(as.table(df)), so I also tried replacing that call with reshape2::melt(df), but it ran out of RAM as well.
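For a sense of scale, here is my rough arithmetic (assuming 8-byte doubles and 4-byte integer factor codes for Var1/Var2):

n     <- 50000
cells <- n^2                    # as.table()/melt() produce one row per cell: 2.5e9 rows
(cells * (4 + 4 + 8)) / 2^30    # ~37 GiB for a single copy of the long table

Together with the ~20GB input matrix, the intermediate table object, and the copies that subset() makes along the way, that quickly approaches the 128GB limit.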
I am running the code on an Ubuntu machine with 128GB RAM. I do have larger machines, but I would've expected this amount of RAM to suffice.
Any help would be highly appreciated. Thank you.
2 Answers
Okay, after digging a bit more and trying other things, I found a solution in a previous post that eventually worked.
For reference, testing it out on my real data (a 49,100 x 49,100 correlation matrix) took less than 10 minutes.
If you have such large datasets, I really suggest calling the garbage collector between the two commands (sketched below), as it actually helped. It is not ideal, but given my setup and time constraints, it is a solution.
Thank you @Robert Hacken for spotting that erroneous "," in df[lower.tri(df, diag = T),] = NA (i.e., the comma before the closing bracket should be removed).
I think that what @Mikael Jagan has proposed might be more memory-efficient, but I did not manage to successfully run his code.
If you have as much RAM as you say, then this really should work without issue for n much larger than 6. If you see errors not related to memory usage, then you should share the code that you evaluated, since probably you have made a mistake adapting the example … If you are using a version of R older than 4.0.0, where sequence is defined differently, then you'll want something like:
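(The original snippet is not reproduced here; below is a sketch of a pre-4.0.0 replacement, assuming sequence() is used with the from= argument that was added in R 4.0.0. Before 4.0.0, sequence(nvec) was simply unlist(lapply(nvec, seq_len)); the helper name sequence2 is made up.)

# drop-in stand-in for R >= 4.0.0 sequence(nvec, from, by)
sequence2 <- function(nvec, from = 1L, by = 1L)
  unlist(Map(function(n, f, b) f + b * (seq_len(n) - 1), nvec, from, by),
         use.names = FALSE)

# illustration: unique pairs via linear indices of the strict upper triangle;
# `from` is kept double so the indices don't overflow 32-bit integers for large n
n     <- nrow(df)
idx   <- sequence2(seq_len(n - 1), from = as.numeric(n) * seq_len(n - 1) + 1)
pairs <- data.frame(Var1  = rownames(df)[sequence2(seq_len(n - 1))],
                    Var2  = colnames(df)[rep(2:n, seq_len(n - 1))],
                    value = df[idx])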