Why do simple programs take up so much storage space?

I created a simple hello world program in C like so:

#include <stdio.h>

int main() {
    printf("Hello World!\n");
    return 0;
}
Afterwards, I compiled it on a Mac using gcc and dumped it using xxd. At 16 bytes per line (8 words), the compiled program came to a total of 3073 lines, or 49 424 bytes. Of all these bytes, only 1 904 made up the program; the remaining 47 520 bytes were all zeros. Considering that only about 3.9% of the bytes are non-zero, this looks like a clear waste of space. Is there any way to optimize the size of the executable here? (By the way, I already tried the -Os compiler option, and it made no difference.)

Edit: I got these numbers by counting lines of the hex dump, but the lines containing actual instructions contained zeros as well. I didn't count those bytes, as they may be crucial to the execution of the program (like the null terminator of the string "Hello World!\n"); I only counted full blocks of zeros.

gcc on macOS generates object and executable files in the Mach-O file format. The file is divided into multiple segments, each of which has an alignment requirement to make loading more efficient (which is why you get all the zero padding). I took your code and built it on my Mac with gcc; it gives me an executable of 8432 bytes, and yes, xxd shows me a bunch of zeros. Here's the objdump output for the section headers:

$ objdump -section-headers hello

hello:  file format Mach-O 64-bit x86-64

Idx Name          Size      Address          Type
  0 __text        0000002a 0000000100000f50 TEXT 
  1 __stubs       00000006 0000000100000f7a TEXT 
  2 __stub_helper 0000001a 0000000100000f80 TEXT 
  3 __cstring     0000000f 0000000100000f9a DATA 
  4 __unwind_info 00000048 0000000100000fac DATA 
  5 __nl_symbol_ptr 00000010 0000000100001000 DATA 
  6 __la_symbol_ptr 00000008 0000000100001010 DATA 

__text contains the machine code of your program, __cstring contains the literal "Hello World!\n", and there's a bunch of metadata associated with each section.

This kind of structure is obviously overkill for a simple program like yours, but simple programs like yours are not the norm. Object and executable file formats have to be able to support dynamic loading, symbol relocation, and other things that require complex structures. There's a minimum level of complexity (and thus size) for any compiled program.

So executable files for "small" programs are larger than you think they should be based on the source code, but realize there's a lot more than just your source code in there.

I've done something like this once. The main source of problems was that C++ is stricter about types, as you suspected. You'll have to add casts where void * is mixed with pointers of other types, for example when allocating memory:

struct foo *p;
p = malloc(sizeof(*p));

The above is typical C code, but it'll need a cast in C++:

struct foo *p;
p = (struct foo *) malloc(sizeof(*p));

There are new reserved words in C++, such as "class", "and", "bool", "catch", "delete", "explicit", "mutable", "namespace", "new", "operator", "or", "private", "protected", "friend", etc. These cannot be used as identifiers (variable names, for example).

The above are probably the most common problems when you compile old C code with a C++ compiler. For a complete list of incompatibilities, see Incompatibilities Between ISO C and ISO C++.

You also ask about name mangling. In the absence of extern "C" wrappers, the C++ compiler will mangle the symbols. That's not a problem as long as you use only a C++ compiler and don't rely on dlsym() or something like that to pull symbols from libraries.

The problem is twofold: i) train() doesn't just fit a model via glm(), it bootstraps that model, so even with the defaults train() will draw 25 bootstrap samples, which, coupled with problem ii), is the (or a) source of your problem; and ii) train() simply calls glm() with its defaults, and those defaults are to store the model frame (argument model = TRUE of ?glm), which includes a copy of the data in model-frame style. The object returned by train() already stores a copy of the data in $trainingData, and the "glm" object in $finalModel also has a copy of the actual data.

At this point, simply running glm() via train() will be producing 25 copies of the fully expanded model.frame plus the original data, all of which will need to be held in memory during the resampling process; whether these are held concurrently or consecutively is not immediately clear from a quick look at the code, as the resampling happens in an lapply() call. There will also be 25 copies of the raw data.

Once the resampling is finished, the returned object will contain two copies of the raw data and a full copy of the model.frame. If your training data is large relative to available RAM or contains many factors to be expanded in the model.frame, then you could easily be using huge amounts of memory just carrying copies of the data around.

If you add model = FALSE to your train() call, that might make a difference. Here is a small example using the clotting data in ?glm:

clotting <- data.frame(u = c(5,10,15,20,30,40,60,80,100),
                       lot1 = c(118,58,42,35,27,25,21,19,18),
                       lot2 = c(69,35,26,21,18,16,13,12,12))


> m1 <- train(lot1 ~ log(u), data = clotting, family = Gamma, method = "glm", 
+             model = TRUE)
Fitting: parameter=none 
Aggregating results
Fitting model on full training set
> m2 <- train(lot1 ~ log(u), data = clotting, family = Gamma, method = "glm",
+             model = FALSE)
Fitting: parameter=none 
Aggregating results
Fitting model on full training set
> object.size(m1)
121832 bytes
> object.size(m2)
116456 bytes
> ## ordinary glm() call:
> m3 <- glm(lot1 ~ log(u), data = clotting, family = Gamma)
> object.size(m3)
47272 bytes
> m4 <- glm(lot1 ~ log(u), data = clotting, family = Gamma, model = FALSE)
> object.size(m4)
42152 bytes

So there is a size difference in the returned object, and memory use during training will be lower. How much lower will depend on whether the internals of train() keep all copies of the model.frame in memory during the resampling process.

The object returned by train() is also significantly larger than that returned by glm(), as mentioned by @DWin in the comments below.

To take this further, either study the code more closely or email Max Kuhn, the maintainer of caret, to enquire about options to reduce the memory footprint.

As @rckoenes said, don't display images with such a large file size. You need to resize the image before you display it:

UIImage *image = [UIImage imageNamed:@"background.jpg"];
// Pass the CGSize you want for your UIImageView.
self.backgroundImageView.image = [self imageWithImage:image scaledToSize:CGSizeMake(20, 20)];
[self.view addSubview:self.backgroundImageView];

- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    // In the next line, pass 0.0 to use the current device's pixel scaling
    // factor (and thus account for Retina resolution); pass 1.0 to force
    // exact pixel size.
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

You could just check the name of the compiler:

cc = env['CC']
if cc == 'cl':
    env.Append(CCFLAGS='/Wall')
elif cc == 'gcc':
    env.Append(CCFLAGS='-Wall')
