# Dump all UserMail records to a JSON file
user_mails = UserMail.all
data = { user_mails: user_mails }
File.open("demo_data.json", "w") { |f| f.write(data.to_json) }

# Read the file back and re-create the records
json_data = JSON.parse(File.read("demo_data.json"))
json_data['user_mails'].each do |attrs|
  UserMail.where(attrs).first_or_create!
end
When I try this, my system gets stuck and hangs, and I have to restart it. I found that the problem is in "data.to_json": when I run that line alone the system also hangs, so writing the file is not the issue. Also note that "user_mails" contains a very large data set, which may be the cause.
activity_logs = ActivityLog.all
consumers = Consumer.all
data = {activity_logs: activity_logs, consumers: consumers }
File.open("demo_data.json", "w") { |f| f.write data.to_json }
This works fine without any problem because these tables have medium-sized data sets.
I have already tried zipping and compressing, but that does not help because the problem is in "data.to_json" itself.
Please suggest how I can achieve this.
2 Answers
Yes, I tried it and the answer above works fine, thanks. I have also tried a different approach to reach the same goal (transferring the large data set using a database dump). Here is the code:
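This is only a sketch of the kind of dump/restore commands involved; it assumes PostgreSQL, and the database names, host, and user are placeholders to adjust for your own setup:

# Dump only the user_mails table from the source database (names are placeholders)
pg_dump --data-only --table=user_mails -h localhost -U postgres source_db > user_mails.sql

# Restore the dumped data into the target database
psql -h localhost -U postgres target_db < user_mails.sql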
Run this in the terminal. It may not be the proper way, but it gets my work done.
That’s an interesting problem.
When you call "data.to_json", it loads all the records from the database, instantiates an ActiveRecord Ruby object for each row, and finally serializes each row back to JSON. For more than 1000 rows this isn't efficient, especially in Ruby. As suggested in a comment, you'll first need to query the records efficiently in batches, then append each of them (or each batch) to the JSON file. Here's a proof of concept:
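A minimal sketch along those lines, using find_each (the UserMail model, batch size, and output file name are just illustrative here):

File.open("demo_data.json", "w") do |file|
  file.write('{"user_mails":[')

  # find_each loads the records in batches instead of all at once
  UserMail.find_each(batch_size: 1000) do |user_mail|
    # Serialize one record at a time and append it to the file
    file.write(user_mail.to_json)
    file.write(",")
  end

  file.write("]}")
end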
Note: with this POC you'll need to find a way to remove the trailing comma after the last element to get valid JSON.
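One way to avoid the trailing comma altogether (again just a sketch, this time using find_in_batches) is to join each batch with commas and only write a separator between batches:

File.open("demo_data.json", "w") do |file|
  file.write('{"user_mails":[')
  first_batch = true

  UserMail.find_in_batches(batch_size: 1000) do |batch|
    # Write a separating comma before every batch except the first
    file.write(",") unless first_batch
    file.write(batch.map(&:to_json).join(","))
    first_batch = false
  end

  file.write("]}")
end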
References:
#find_each: https://apidock.com/rails/ActiveRecord/Batches/find_each
#find_in_batches: https://apidock.com/rails/ActiveRecord/Batches/find_in_batches