Yeah, I saw that after looking it up. I wasn't questioning the statement; it's just that I personally wouldn't go through each file looking for license violations. That's all.
In multi-threaded mode, each thread creates a separate memory pool; in single-threaded mode, a global memory pool is used. You can refer to https://github.com/neocanable/garlic/blob/72357ddbcffdb75641.... The `x_alloc` and `x_alloc_in` functions there indicate where memory is allocated. When each task ends, the memory allocated in its pool is released, and the cycle repeats.
Many command line tools do not need memory management at all, at least to a first approximation: free nothing and let the OS clean up on process exit. Most libraries can either use an arena internally and copy any values that get returned to the user onto the heap at the boundary, or require the user to externally create and destroy the arena. This can be made ergonomic with one macro that injects an arena argument into function definitions and another that replaces malloc with a bump of the local arena pointer that the first macro injected.
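A minimal sketch of that macro trick, with hypothetical names (not taken from any existing library):

```c
#include <stddef.h>

/* A tiny bump allocator: allocations are never freed individually;
   the whole pool is discarded (or reset) in one step. */
typedef struct { unsigned char *base; size_t used, cap; } arena_t;

static void *arena_alloc(arena_t *a, size_t n) {
    n = (n + 15) & ~(size_t)15;              /* keep allocations aligned */
    if (a->used + n > a->cap) return NULL;   /* out of space; real code might grow */
    void *p = a->base + a->used;
    a->used += n;
    return p;
}

/* One macro injects the arena parameter into function definitions... */
#define ARENA_FN(ret, name, ...) ret name(arena_t *arena, __VA_ARGS__)
/* ...and another makes "malloc" inside such functions bump that arena. */
#define MALLOC(n) arena_alloc(arena, (n))

/* Example: a function that "mallocs" without ever calling free. */
ARENA_FN(int *, make_ints, size_t count) {
    int *v = MALLOC(count * sizeof *v);
    return v;                                /* freed wholesale with the arena */
}
```

The appeal is that cleanup becomes a single `used = 0` (or one free of the backing buffer) at the end of a task, rather than a free per object.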
That might be true, but leaking is neither the most critical nor the hardest-to-find memory management issue, and good luck trying to adapt or even run valgrind on a codebase that mindlessly allocates and leaks everywhere.
It can still be a bug if you use something after you would have freed it because your code isn't meant to be using that object any more. It points to errors in the logic.
This project is my first project written in the C language. Before this, my C level was only at printf("hello world"). I am very happy because this project made me dare to use double pointers.
> I am always curious how different C programs decide how to manage memory.
At a basic level, you can create memory on the stack or on the heap. Obviously I will focus on the heap, as that is where you dynamically allocate memory of a given size.
The C programming language does not dictate how you handle memory; you are pretty much on your own. Some C programmers (likely the more inexperienced ones) will malloc individual variables as if they were creating a 'new' instance in a typical OOP language like Java. This can be a telltale sign of a C programmer coming from an OOP background. As they learn and improve their C skills they realise they should allocate a chunk of memory of a certain type, but they could still be malloc(ing) and free(ing) all over the code, making it difficult to understand what is being used and where -- especially if you are looking at code you did not write.
You can also have programs that do not bother free(ing) memory. For example, a simple shell program that just does simple input->process->output and terminates. For these types of programs, just let the OS deal with freeing the memory.
Good C code (in my opinion) uses malloc and free in only a handful of functions. On top of those you build proper allocators; one example is an arena allocator. Then, if you have a function that may require dynamic memory, you can tell it which allocator to use (a rough sketch of this idea follows below). It gives you control, generally speaking. You can create a simple string library or builder around an allocator.
Of course an allocator does not have to use memory on the heap. It can use the stack as well.
There are various other patterns to use in the world of memory, especially in C.
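A rough sketch of that allocator-parameter idea; the API and names are made up for illustration and not taken from any real library:

```c
#include <stddef.h>
#include <string.h>

/* The interface: anything that can hand out memory. */
typedef struct allocator {
    void *(*alloc)(struct allocator *self, size_t size);
} allocator;

/* One concrete allocator: a fixed-size arena that is discarded all at once.
   The backing buffer could just as well live on the stack. */
typedef struct {
    allocator base;                 /* "vtable" first, so it can be cast back */
    unsigned char buf[64 * 1024];
    size_t used;
} arena_allocator;

static void *arena_alloc(allocator *self, size_t size) {
    arena_allocator *a = (arena_allocator *)self;
    size = (size + 15) & ~(size_t)15;
    if (a->used + size > sizeof a->buf) return NULL;
    void *p = a->buf + a->used;
    a->used += size;
    return p;
}

/* A string helper that never calls malloc itself; it uses whatever
   allocator the caller passes in. */
static char *dup_string(allocator *a, const char *s) {
    size_t n = strlen(s) + 1;
    char *copy = a->alloc(a, n);
    if (copy) memcpy(copy, s, n);
    return copy;                    /* lives until the arena is reset/discarded */
}

/* Usage:
       arena_allocator a = { { arena_alloc }, {0}, 0 };
       char *hello = dup_string(&a.base, "hello");   */
```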
I am writing the dex and apk decompiling part. It is currently about 10 times faster than the Java equivalent, uses fewer resources than Java, and the compiled binary is smaller, only about 300k. Thank you for your attention.
This has been my life experience with things written in C/C++, so speed doesn't matter. Or, I guess from an alternative perspective, it ran very fast, but exited very fast, too :-D
Is it? This is my experience with Python. The C/C++ programs I use daily never seem to crash (Linux, bash, terminals, X, firefox, vim, etc.). It must be years ago one of those programs crashed while I used it.
Also, a segfault IS the protection layer intervening; it is equivalent to an exception in other languages. The real problem is when there is no segfault.
This is absolutely true. But even this does not happen in the software I use every day. Software written in C is definitely the most stable I use - by far. That there are people running around claiming it is impossible to write stable software in C and that it crashes all the time due to bugs is rather unfortunate, as it is far from the truth.
The readme shows support for dumping dex files. Edit: missed that it has a comment that says "unsupport for now", but at least it looks like something planned.
Nice job! I don't know whether you know https://github.com/java-decompiler/jd-gui or not, but in case you haven't seen it before, maybe you could use it as a reference, since it's written in Java, for extra fun with your adventure?
Things may have changed, but my impression as of several years ago was that JD-GUI was far, far behind the state of the art (Fernflower, aka the built-in IntelliJ decompiler) in terms of correctness, re-sugaring, support for modern Java features, and so on. Fernflower is open source as part of IntelliJ: https://github.com/fesh0r/fernflower
Not that I know of. The features I'd want in order to consider a decompiler GUI "good" (e.g. a good text editing control, go-to-definition, find usages, manual renaming of obfuscated symbol names) quickly approach the scope of an entire IDE, though.
I think that sort of ratio is the sweet spot for learning. I've been writing an 8086 simulator in C++ and using an LLM for answering specific technical questions I come up with has drastically sped up my progress without it actually doing the work for me.
They can, if you write down your thought process, which is probably what you should do when you are using an LLM to create a product, but what do I know.
You do not have to be as accurate or as specific, and you do not have to worry about the way you word or organize things; the LLM can figure it out, as opposed to a blog post.
So "To some people the process leading to a finished project is the most interesting thing about posts like these." is bullshit, that is said by someone who has never used LLM properly. You can achieve it with LLMs. You definitely can, I know, I did, accurately (I double checked).
How come? You had different experiences? Which LLMs, what prompts? Give me all the details that support your claim that it is not true. My experiences completely differ from yours, so the way I use it, it is very much true.
That said, it is probably pointless to argue with full-blown AI-skeptics.
People have had lots of great and productivity-enhancing experiences with LLMs; you did not. Fine, but that does not reflect on the tool, it reflects your way of using the tool.
Of course it can be done! It wouldn't be as general purpose as this Java decompiler in C, because a C decompiler would have to know about the CPU architecture of the executable code (just as a Java decompiler has to know about JVM opcodes).
I moved from C++ to C and I am more productive. I also think this "no seat belts" meme is exaggerated, as there are plenty of tools and strategies to make C fairly safe to use. (it is true though that many people do not put the seat belts on).
In my experience, although many of the other programming languages do improve some things compared with C, they also make many things worse and avoid some of the benefits of C programming.
I cannot help but wonder why I would learn a whole new language before even starting a new project when I already know C. Though, generally speaking, I tend to use C++ for new projects -- usually depending on what libraries I'm using: if the lib is in C I use C, and if the lib is in C++ I use C++. The current thing I'm working on is intended as a Python extension module and Python is written in C so...
And, yes, I know it's trivial to interface the Python C-API with C++, and it's quite often better since the 'object model' is very similar, but the underlying concept I wanted to explore (guaranteed tail calls) isn't possible in C++ from what I can tell.
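If what's meant by guaranteed tail calls is clang's `musttail` extension (an assumption on my part), a minimal C sketch looks like this; it is a compiler-specific attribute, not standard C:

```c
/* Requires clang 13+; caller and callee must have matching signatures.
   The compiler rejects the program if it cannot perform the tail call. */
long sum_to(long n, long acc) {
    if (n == 0)
        return acc;
    __attribute__((musttail)) return sum_to(n - 1, acc + n);
}
```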
This is the best question for me. Writing this code in C is the best way to learn the file structures of jvm/dalvik/pe. This process has made me like the C language more. For me, it is simple and pure, which is enough.
Rust certainly does have some improvements, but I'm not 100% certain that it's the best tool for all low-level software. For example, I'm experimenting with Rust for some filesystem type code and I can't figure out how to write/read a struct to/from disk all at once. I'm brand new to Rust, so it's quite possible that it can be done and I just don't know the technique. Basically, I'm looking for something in Rust analogous to C's fread/fwrite. I know I can write out each field of the struct individually, but when the struct has many fields it means having to write a huge amount of nasty boilerplate code when in C it's a single function call (fread/fwrite).
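For reference, a minimal sketch of the single-call C pattern being compared against; the struct here is hypothetical, and this only works for plain structs with no pointers (and ties the on-disk format to the compiler's padding and the machine's endianness):

```c
#include <stdio.h>

struct superblock {                 /* hypothetical on-disk structure */
    unsigned int magic;
    unsigned int block_size;
    unsigned long long block_count;
};

int save(const char *path, const struct superblock *sb) {
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    size_t ok = fwrite(sb, sizeof *sb, 1, f);   /* whole struct in one call */
    fclose(f);
    return ok == 1 ? 0 : -1;
}

int load(const char *path, struct superblock *sb) {
    FILE *f = fopen(path, "rb");
    if (!f) return -1;
    size_t ok = fread(sb, sizeof *sb, 1, f);    /* read it back the same way */
    fclose(f);
    return ok == 1 ? 0 : -1;
}
```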
This is generally unsafe, so to make it safe there needs to be something that restricts what kind of things you can read and write.
For example, if your structure contains a reference, and you read an instance of that from disk, then you now have a potentially invalid reference, bypassing Rust's guarantees. Reading a structure of i32 numbers is safe, but it also has endianness footguns.
The zerocopy crate addresses this: it provides traits plus derive macros that let you mark types as safe to serialize/deserialize, without writing unsafe code yourself.
I love Rust, but we really have got to stop this reflexive link between C and Rust.
If someone mentions C, that's not a free invite to start educating them on why they SHOULD use Rust. No one at the party is going to talk to you again that night.
It's thanks to people like you that Rust is not more widely used; you actively make people avoid the Rust community because they will think everybody is like you!
https://opensource.stackexchange.com/questions/10737/inclusi...
Do you have a scanner that checks these sorts of things or is it something that you are passionate about?
1. How silly to write such a thing in C from scratch. Such a project will invariably invent half of Lisp in order to have the right kind of infrastructure for doing this and that.
2. Let's look for some of it up and down the tree. Oh look, there is a bitset and hashmap, see? I don't see test cases for these anywhere; is it original work from this project or battle-tested code taken from elsewhere?
3. Open hashmap.c ...
GPL violation found in half a minute.
In this case there is a custom string library. Functions return owned, heap-allocated strings.
However, I think there's a problem where static strings are used interchangeably with heap-allocated strings, such as in the function `string class_simple_name(string full)` ( https://github.com/neocanable/garlic/blob/72357ddbcffdb75641... )
Sometimes it returns a static string like `g_str_int` and sometimes a newly heap-allocated string, such as returned by `class_type_array_name(g_str_int, depth)`.
Callers have no way to properly release the memory allocated by this function.
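An illustrative sketch of why that mix is painful for callers; this is not the project's actual code, just the general shape of the problem:

```c
#include <stdlib.h>
#include <string.h>

/* Illustration only. A function that returns either static storage or
   heap storage leaves the caller guessing about ownership: */
static const char *g_int_name = "int";

const char *type_name(int array_depth) {
    if (array_depth == 0)
        return g_int_name;            /* static: must NOT be passed to free() */
    size_t n = strlen(g_int_name) + 2 * (size_t)array_depth + 1;
    char *s = malloc(n);
    if (!s) return NULL;
    strcpy(s, g_int_name);
    for (int i = 0; i < array_depth; i++)
        strcat(s, "[]");
    return s;                         /* heap: the caller MUST free() it */
}
/* Whether freeing the result is required or undefined behavior depends on
   the argument, and nothing in the signature tells the caller which case
   they got. */
```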
Of course literally running valgrind is still possible, but it is difficult to get useful information.
That's the beauty of the never free memory management strategy.
I think some <ctype.h> implementations are hardened against this issue, but not all.
I guess there's some history there that I'm not familiar with because JBoss also has a FernFlower decompiler library https://mvnrepository.com/artifact/org.jboss.windup.decompil...
> Examples of Vineflower's output, compared to other decompilers, can be found on the wiki.
[wiki is empty]
:-/
Any plan to support `.dex` in the future? Also curious how you handle inner classes inside JARs.
It seems someone liked it and made a "v2" along with LSP support https://github.com/A-LPG/LPG2#lpg2
https://www.jikesrvm.org/
I was hoping that these days Java would be "almost" as fast as C/C++. Oh well.
I want to hear about the reverse engineering, how you thought the code through. LLMs are boring.
Just write a blogpost at that point.
So "To some people the process leading to a finished project is the most interesting thing about posts like these." is bullshit, that is said by someone who has never used LLM properly. You can achieve it with LLMs. You definitely can, I know, I did, accurately (I double checked).
I will throw it out here, too: https://news.ycombinator.com/item?id=44163063 (My AI skeptic friends are all nuts)
That said, it is probably pointless to argue with full-blown AI-skeptics.
People had lots of great and productive-enhancing experiences with LLMs, you did not, great, that does not reflect the tool, it reflects your way of using the tool.
I will just throw it out here: https://news.ycombinator.com/item?id=44163063 (My AI skeptic friends are all nuts)
Additionally, "interesting" is highly subjective. It could be technically correct, yet uninteresting.
But stupid real-world analogies are stupid.
```c
goto eject;
/* ...more code we are going to ignore; it could be important, but nah, what could happen?... */
eject:
    up_through_the_roof();
```
:D