You thought 68k was good, 90k is better 😏 Mac users, welcome to Mojo🔥! Download this Thursday at hubs.ly/Q025PkjV0 🤘🏼💯 pic.twitter.com/ku6VuNKT8n
2023-10-18 01:03:04@Modular_AI Screenshot recap for people who struggled with the pause button (like me 😅) pic.twitter.com/tth3kyrSR3
2023-10-18 20:59:05Today is the day! Mojo for Mac is live! 🔥 😱 🚀 Download it right now! ❤️🔥 Read our launch blogpost on how to get started ⬇️ hubs.ly/Q026571V0
2023-10-20 00:18:59@Modular_AI Exciting! I got early access to the Mojo SDK (Mac) a week ago and compared its performance on baby-llama inference: Mojo vs Rust, C, C++, Go, Zig, and Julia. In total 12 implementations across 7 languages x 3 models x 30 rounds. Check this out engiware.com/benchmark/llam…
2023-10-20 02:00:50@tairov @Modular_AI Interesting. Why is the variance a bit higher? Possible fine-tuning incoming? Any ideas?
2023-10-20 05:15:02@raxtechbits @Modular_AI Haven't dug into this. I ran all rounds with temperature = 0 and, where possible, seed = 100 for the randomizer; possibly there is still some place where a randomization factor is impacting the benchmark.
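The determinism settings described above (temperature = 0 for greedy decoding, a fixed seed for any remaining random paths) can be sketched in Python; `sample_token` and its inputs are illustrative stand-ins, not the benchmark's actual code:

```python
import math
import random

# Illustrative sketch: temperature = 0 means greedy decoding (pure argmax),
# which removes sampling noise entirely; a fixed seed pins down any code
# paths that still sample.
SEED = 100
rng = random.Random(SEED)

def sample_token(logits, temperature=0.0):
    """Pick the next token id; temperature = 0 falls back to argmax."""
    if temperature == 0.0:
        return max(range(len(logits)), key=lambda i: logits[i])
    # Temperature scaling followed by softmax sampling.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]
    return rng.choices(range(len(logits)), weights=weights)[0]

print(sample_token([0.1, 2.5, 0.3]))  # greedy decoding → 1
```

With temperature = 0 every round produces the same token sequence, so any remaining run-to-run variance must come from timing, not from sampling.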
2023-10-20 07:06:00@Modular_AI The REPL doesn't execute on Mac. Is the JIT compiler working properly there?
2023-10-20 19:00:48@Modular_AI I get the following error on macOS (M1 Max): "modular: The arm64 architecture is required for this software. Error: modular: An unsatisfied requirement failed this build." Am I missing something?
2023-10-20 06:35:17I believe that's... 263293x faster than Python, and 2.83x faster than Mojo🔥 twitter.com/Modular_AI/sta… pic.twitter.com/LoiiudijOD
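A quick sanity check of the arithmetic in the two claims above (the two speedup figures are copied from the tweet; dividing them gives the implied Mojo-vs-Python ratio):

```python
# Speedups quoted in the tweet, both relative to the same baseline run:
speedup_vs_python = 263293   # this result vs pure Python
speedup_vs_mojo = 2.83       # this result vs Mojo

# If both ratios share a baseline, their quotient is Mojo vs Python:
mojo_vs_python = speedup_vs_python / speedup_vs_mojo
print(f"{mojo_vs_python:,.0f}x")  # → 93,036x
```

That implied ~93,000x lines up with the "90k" figure from the launch tweet earlier in this thread.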
2023-10-18 10:47:32We make tinygrad. Our mission is to commoditize the petaflop.
@__tinygrad__ These are really great results TinyCo, are these on the same HW as Python is running on, or on something like the GPU? If the latter, stay tuned for some interesting things later 🤭
2023-10-18 12:35:20@clattner_llvm Thanks! It’s the GPU, M1 Max. Yours is CPU multi core? Using AMX instructions?
2023-10-18 15:16:19@clattner_llvm We don't support multicore, here's our single core M2 Max perf (no AMX). About 1/12th of yours. pic.twitter.com/zWv5V8Tl5b
2023-10-18 15:47:49@__tinygrad__ Yep the numbers above are just multicore CPU, without AMX
2023-10-19 00:47:51@clattner_llvm @__tinygrad__ Let's have a peek under the hood... What? The secret sauce is LLVM?
2023-10-19 01:08:27@__tinygrad__ @ggerganov Llama.cpp still ahead of the game pic.twitter.com/31XRgGqbzi
2023-10-18 20:52:44@tierotiero @ggerganov Good to see they are benchmarking us! Yea llama.cpp is impressively fast at quantized stuff, we’ll get there.
2023-10-18 23:37:36Mojo🔥on Apple Silicon. Llama2.mojo up to 960 tokens/sec. This will be an interesting evening today. modular.com/blog/mojo-is-n…
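A tokens/sec figure like the one above is just tokens generated divided by wall-clock time; a minimal Python sketch of the measurement, where `generate` is a hypothetical stand-in for the model's decode loop:

```python
import time

def measure_tokens_per_sec(generate, n_tokens):
    """Time a decode loop and return throughput in tokens per second."""
    start = time.perf_counter()
    generate(n_tokens)  # run inference until n_tokens tokens are produced
    elapsed = time.perf_counter() - start
    return n_tokens / elapsed

# Usage with a stand-in "model" that just sleeps a fixed time per token:
tps = measure_tokens_per_sec(lambda n: time.sleep(n * 0.0001), 256)
print(f"{tps:.0f} tok/s")
```

In practice the prompt-processing (prefill) phase is usually timed separately from generation, since the two have very different throughput.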
2023-10-20 22:13:12Mojo now supports Mac! Mojo is a language implementation that is a superset of Python, the language usually used for AI development, and is said to be at least tens of times faster. I'll try it out this weekend! >Mojo🔥 is now available on Mac modular.com/blog/mojo-is-n…
2023-10-21 09:38:55As expected of ChatGPT: using Bing, it seems it can already write Mojo🔥. I've asked it for an assist pic.twitter.com/Ccj6VouBxl