Commit 82782ca7 authored by Linus Torvalds's avatar Linus Torvalds

Merge branch 'x86-kbuild-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip

* 'x86-kbuild-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (46 commits)
  x86, boot: add new generated files to the appropriate .gitignore files
  x86, boot: correct the calculation of ZO_INIT_SIZE
  x86-64: align __PHYSICAL_START, remove __KERNEL_ALIGN
  x86, boot: correct sanity checks in boot/compressed/misc.c
  x86: add extension fields for bootloader type and version
  x86, defconfig: update kernel position parameters
  x86, defconfig: update to current, no material changes
  x86: make CONFIG_RELOCATABLE the default
  x86: default CONFIG_PHYSICAL_START and CONFIG_PHYSICAL_ALIGN to 16 MB
  x86: document new bzImage fields
  x86, boot: make kernel_alignment adjustable; new bzImage fields
  x86, boot: remove dead code from boot/compressed/head_*.S
  x86, boot: use LOAD_PHYSICAL_ADDR on 64 bits
  x86, boot: make symbols from the main vmlinux available
  x86, boot: determine compressed code offset at compile time
  x86, boot: use appropriate rep string for move and clear
  x86, boot: zero EFLAGS on 32 bits
  x86, boot: set up the decompression stack as early as possible
  x86, boot: straighten out ranges to copy/zero in compressed/head*.S
  x86, boot: stylistic cleanups for boot/compressed/head_64.S
  ...

Fixed trivial conflict in arch/x86/configs/x86_64_defconfig manually
parents f0d5e12b 6799687a
@@ -50,6 +50,10 @@ Protocol 2.08:	(Kernel 2.6.26) Added crc32 checksum and ELF format
Protocol 2.09:	(Kernel 2.6.26) Added a field of 64-bit physical
		pointer to single linked list of struct setup_data.
Protocol 2.10:	(Kernel 2.6.31) Added a protocol for relaxed alignment
		beyond kernel_alignment; added the new init_size and
		pref_address fields.  Added extended boot loader IDs.
**** MEMORY LAYOUT

The traditional memory map for the kernel loader, used for Image or

@@ -168,12 +172,13 @@ Offset	Proto	Name		Meaning
021C/4	2.00+	ramdisk_size	initrd size (set by boot loader)
0220/4	2.00+	bootsect_kludge	DO NOT USE - for bootsect.S use only
0224/2	2.01+	heap_end_ptr	Free memory after setup end
0226/1	2.02+(3	ext_loader_ver	Extended boot loader version
0227/1	2.02+(3	ext_loader_type	Extended boot loader ID
0228/4	2.02+	cmd_line_ptr	32-bit pointer to the kernel command line
022C/4	2.03+	ramdisk_max	Highest legal initrd address
0230/4	2.05+	kernel_alignment Physical addr alignment required for kernel
0234/1	2.05+	relocatable_kernel Whether kernel is relocatable or not
0235/1	2.10+	min_alignment	Minimum alignment, as a power of two
0236/2	N/A	pad3		Unused
0238/4	2.06+	cmdline_size	Maximum size of the kernel command line
023C/4	2.07+	hardware_subarch Hardware subarchitecture

@@ -182,6 +187,8 @@ Offset	Proto	Name		Meaning
024C/4	2.08+	payload_length	Length of kernel payload
0250/8	2.09+	setup_data	64-bit physical pointer to linked list
				of struct setup_data
0258/8 2.10+ pref_address Preferred loading address
0260/4 2.10+ init_size Linear memory required during initialization
(1) For backwards compatibility, if the setup_sects field contains 0, the
    real value is 4.

@@ -190,6 +197,8 @@ Offset	Proto	Name		Meaning
    field are unusable, which means the size of a bzImage kernel
    cannot be determined.
(3) Ignored, but safe to set, for boot protocols 2.02-2.09.
If the "HdrS" (0x53726448) magic number is not found at offset 0x202,
the boot protocol version is "old".  Loading an old kernel, the
following parameters should be assumed:

@@ -343,18 +352,32 @@ Protocol:	2.00+
  0xTV here, where T is an identifier for the boot loader and V is
  a version number.  Otherwise, enter 0xFF here.
For boot loader IDs above T = 0xD, write T = 0xE to this field and
write the extended ID minus 0x10 to the ext_loader_type field.
Similarly, the ext_loader_ver field can be used to provide more than
four bits for the bootloader version.
For example, for T = 0x15, V = 0x234, write:
type_of_loader <- 0xE4
ext_loader_type <- 0x05
ext_loader_ver <- 0x23
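
  As a cross-check of that worked example, here is a minimal C sketch
  (not part of this patch; struct loader_id_fields and set_loader_id
  are hypothetical stand-ins, though the field names match the
  bootparam.h hunk below) of how a loader could compose the three
  fields:

	#include <assert.h>
	#include <stdint.h>

	/* Only the three header fields involved in the loader ID. */
	struct loader_id_fields {
		uint8_t type_of_loader;
		uint8_t ext_loader_type;
		uint8_t ext_loader_ver;
	};

	static void set_loader_id(struct loader_id_fields *hdr,
				  unsigned int type, unsigned int version)
	{
		if (type >= 0xE) {	/* IDs above 0xD go through the extension */
			hdr->type_of_loader = 0xE0 | (version & 0x0F);
			hdr->ext_loader_type = type - 0x10;
		} else {
			hdr->type_of_loader = (type << 4) | (version & 0x0F);
			hdr->ext_loader_type = 0;
		}
		hdr->ext_loader_ver = version >> 4;	/* upper version bits */
	}

	int main(void)
	{
		struct loader_id_fields hdr;

		set_loader_id(&hdr, 0x15, 0x234);
		assert(hdr.type_of_loader == 0xE4);	/* as in the example above */
		assert(hdr.ext_loader_type == 0x05);
		assert(hdr.ext_loader_ver == 0x23);
		return 0;
	}

  The kernel-side decode (see the arch/x86/kernel/setup.c hunk below)
  reverses this composition.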
  Assigned boot loader ids:
	0  LILO			(0x00 reserved for pre-2.00 bootloader)
	1  Loadlin
	2  bootsect-loader	(0x20, all other values reserved)
	3  Syslinux
	4  Etherboot/gPXE
	5  ELILO
	7  GRUB
	8  U-Boot
	9  Xen
	A  Gujin
	B  Qemu
	C  Arcturus Networks uCbootloader
	E  Extended		(see ext_loader_type)
	F  Special		(0xFF = undefined)

  Please contact <hpa@zytor.com> if you need a bootloader ID
  value assigned.
@@ -453,6 +476,35 @@ Protocol:	2.01+
  Set this field to the offset (from the beginning of the real-mode
  code) of the end of the setup stack/heap, minus 0x0200.
Field name: ext_loader_ver
Type: write (optional)
Offset/size: 0x226/1
Protocol: 2.02+
This field is used as an extension of the version number in the
type_of_loader field. The total version number is considered to be
(type_of_loader & 0x0f) + (ext_loader_ver << 4).
The use of this field is boot loader specific. If not written, it
is zero.
Kernels prior to 2.6.31 did not recognize this field, but it is safe
to write for protocol version 2.02 or higher.
Field name: ext_loader_type
Type: write (obligatory if (type_of_loader & 0xf0) == 0xe0)
Offset/size: 0x227/1
Protocol: 2.02+
This field is used as an extension of the type number in
type_of_loader field. If the type in type_of_loader is 0xE, then
the actual type is (ext_loader_type + 0x10).
This field is ignored if the type in type_of_loader is not 0xE.
Kernels prior to 2.6.31 did not recognize this field, but it is safe
to write for protocol version 2.02 or higher.
Field name:	cmd_line_ptr
Type:		write (obligatory)
Offset/size:	0x228/4

@@ -482,11 +534,19 @@ Protocol:	2.03+
  0x37FFFFFF, you can start your ramdisk at 0x37FE0000.)

Field name:	kernel_alignment
Type:		read/modify (reloc)
Offset/size:	0x230/4
Protocol:	2.05+ (read), 2.10+ (modify)
Alignment unit required by the kernel (if relocatable_kernel is
true.) A relocatable kernel that is loaded at an alignment
incompatible with the value in this field will be realigned during
kernel initialization.
  Starting with protocol version 2.10, this reflects the kernel
  alignment preferred for optimal performance; it is possible for the
  loader to modify this field to permit a lesser alignment.  See the
  min_alignment and pref_address fields below.
Field name:	relocatable_kernel
Type:		read (reloc)

@@ -498,6 +558,22 @@ Protocol:	2.05+
  After loading, the boot loader must set the code32_start field to
  point to the loaded code, or to a boot loader hook.
Field name: min_alignment
Type: read (reloc)
Offset/size: 0x235/1
Protocol: 2.10+
This field, if nonzero, indicates as a power of two the minimum
alignment required, as opposed to preferred, by the kernel to boot.
If a boot loader makes use of this field, it should update the
kernel_alignment field with the alignment unit desired; typically:
kernel_alignment = 1 << min_alignment
There may be a considerable performance cost with an excessively
misaligned kernel. Therefore, a loader should typically try each
power-of-two alignment from kernel_alignment down to this alignment.
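
  A minimal sketch of that fallback loop, not from this patch:
  try_load_at() is a hypothetical placement routine of the loader,
  stubbed here only so the example runs, and the chosen value is what
  the loader would write back into kernel_alignment:

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	/* Hypothetical placement routine; stubbed for illustration only. */
	static bool try_load_at(uint32_t align)
	{
		return align <= 0x200000;	/* pretend only <= 2 MiB works */
	}

	/* Walk down from the preferred alignment to the minimum one. */
	static uint32_t place_kernel(uint32_t kernel_alignment,
				     uint8_t min_alignment)
	{
		uint32_t align;

		for (align = kernel_alignment;
		     align >= (1U << min_alignment); align >>= 1) {
			if (try_load_at(align))
				return align;	/* write back to kernel_alignment */
		}
		return 0;	/* cannot satisfy even the minimum alignment */
	}

	int main(void)
	{
		/* 2.6.31 x86-64 defaults: 16 MiB preferred, 2^21 = 2 MiB minimum */
		uint32_t chosen = place_kernel(0x1000000, 21);

		printf("chosen alignment: %#x\n", chosen);	/* 0x200000 here */
		return chosen ? 0 : 1;
	}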
Field name:	cmdline_size
Type:		read
Offset/size:	0x238/4

@@ -582,6 +658,36 @@ Protocol:	2.09+
  sure to consider the case where the linked list already contains
  entries.
Field name: pref_address
Type: read (reloc)
Offset/size: 0x258/8
Protocol: 2.10+
This field, if nonzero, represents a preferred load address for the
kernel. A relocating bootloader should attempt to load at this
address if possible.
  A non-relocatable kernel will unconditionally move itself and run
  at this address.
Field name:	init_size
Type:		read
Offset/size:	0x260/4
Protocol:	2.10+
This field indicates the amount of linear contiguous memory starting
at the kernel runtime start address that the kernel needs before it
is capable of examining its memory map. This is not the same thing
as the total amount of memory the kernel needs to boot, but it can
be used by a relocating boot loader to help select a safe load
address for the kernel.
The kernel runtime start address is determined by the following algorithm:
if (relocatable_kernel)
runtime_start = align_up(load_address, kernel_alignment)
else
runtime_start = pref_address
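
  A minimal C sketch of how a relocating loader could apply this
  algorithm before committing to a load address (not from the patch
  itself; region_is_free() is a hypothetical memory-map query, stubbed
  with a made-up map so the example runs):

	#include <stdbool.h>
	#include <stdint.h>

	/* Hypothetical e820-style query; stubbed for illustration only. */
	static bool region_is_free(uint64_t start, uint64_t size)
	{
		return start >= 0x1000000 && start + size <= 0x8000000;
	}

	static uint64_t align_up(uint64_t addr, uint64_t align)
	{
		return (addr + align - 1) & ~(align - 1);
	}

	/*
	 * Derive the runtime start address from the chosen load address,
	 * then check that init_size bytes of linear memory are free there.
	 */
	static bool load_address_ok(bool relocatable_kernel,
				    uint64_t load_address,
				    uint64_t kernel_alignment,
				    uint64_t pref_address,
				    uint32_t init_size)
	{
		uint64_t runtime_start;

		if (relocatable_kernel)
			runtime_start = align_up(load_address, kernel_alignment);
		else
			runtime_start = pref_address;	/* kernel moves itself there */

		return region_is_free(runtime_start, init_size);
	}

	int main(void)
	{
		/* loaded at 16 MiB + 4 KiB, 16 MiB alignment, 32 MiB init_size */
		return load_address_ok(true, 0x1001000, 0x1000000,
				       0x1000000, 0x2000000) ? 0 : 1;
	}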
**** THE IMAGE CHECKSUM
......
obj-$(CONFIG_KVM) += kvm/
# Xen paravirtualization support
obj-$(CONFIG_XEN) += xen/
# lguest paravirtualization support
obj-$(CONFIG_LGUEST_GUEST) += lguest/
obj-y += kernel/
obj-y += mm/
obj-y += crypto/
obj-y += vdso/
obj-$(CONFIG_IA32_EMULATION) += ia32/
@@ -47,6 +47,11 @@ config X86
	select HAVE_KERNEL_BZIP2
	select HAVE_KERNEL_LZMA
config OUTPUT_FORMAT
string
default "elf32-i386" if X86_32
default "elf64-x86-64" if X86_64
config ARCH_DEFCONFIG
	string
	default "arch/x86/configs/i386_defconfig" if X86_32

@@ -1460,9 +1465,7 @@ config KEXEC_JUMP
config PHYSICAL_START
	hex "Physical address where the kernel is loaded" if (EMBEDDED || CRASH_DUMP)
	default "0x1000000"
	---help---
	  This gives the physical address where the kernel is loaded.

@@ -1481,15 +1484,15 @@ config PHYSICAL_START
	  to be specifically compiled to run from a specific memory area
	  (normally a reserved region) and this option comes handy.
	  So if you are using bzImage for capturing the crash dump,
	  leave the value here unchanged to 0x1000000 and set
	  CONFIG_RELOCATABLE=y.  Otherwise if you plan to use vmlinux
	  for capturing the crash dump change this value to start of
	  the reserved region.  In other words, it can be set based on
	  the "X" value as specified in the "crashkernel=YM@XM"
	  command line boot parameter passed to the panic-ed
	  kernel.  Please take a look at Documentation/kdump/kdump.txt
	  for more details about crash dumps.
	  Usage of bzImage for capturing the crash dump is recommended as
	  one does not have to build two kernels.  Same kernel can be used

@@ -1502,8 +1505,8 @@ config PHYSICAL_START
	  Don't change this unless you know what you are doing.

config RELOCATABLE
	bool "Build a relocatable kernel"
	default y
	---help---
	  This builds a kernel image that retains relocation information
	  so it can be loaded someplace besides the default 1MB.

@@ -1518,12 +1521,16 @@ config RELOCATABLE
	  it has been loaded at and the compile time physical address
	  (CONFIG_PHYSICAL_START) is ignored.
# Relocation on x86-32 needs some additional build support
config X86_NEED_RELOCS
def_bool y
depends on X86_32 && RELOCATABLE
config PHYSICAL_ALIGN
	hex
	prompt "Alignment value to which kernel should be aligned" if X86_32
	default "0x1000000"
	range 0x2000 0x1000000
	---help---
	  This value puts the alignment restrictions on physical address
	  where kernel is loaded and run from.  Kernel is compiled for an
......
@@ -7,8 +7,6 @@ else
        KBUILD_DEFCONFIG := $(ARCH)_defconfig
endif

# BITS is used as extension for files which are available in a 32 bit
# and a 64 bit version to simplify shared Makefiles.
# e.g.: obj-y += foo_$(BITS).o

@@ -118,21 +116,8 @@ head-y += arch/x86/kernel/init_task.o
libs-y  += arch/x86/lib/

# See arch/x86/Kbuild for content of core part of the kernel
core-y += arch/x86/

# drivers-y are linked after core-y
drivers-$(CONFIG_MATH_EMULATION) += arch/x86/math-emu/
......
@@ -3,6 +3,8 @@ bzImage
cpustr.h
mkcpustr
offsets.h
voffset.h
zoffset.h
setup
setup.bin
setup.elf
@@ -86,19 +86,27 @@ $(obj)/vmlinux.bin: $(obj)/compressed/vmlinux FORCE

SETUP_OBJS = $(addprefix $(obj)/,$(setup-y))

sed-voffset := -e 's/^\([0-9a-fA-F]*\) . \(_text\|_end\)$$/\#define VO_\2 0x\1/p'

quiet_cmd_voffset = VOFFSET $@
      cmd_voffset = $(NM) $< | sed -n $(sed-voffset) > $@

targets += voffset.h
$(obj)/voffset.h: vmlinux FORCE
	$(call if_changed,voffset)

sed-zoffset := -e 's/^\([0-9a-fA-F]*\) . \(startup_32\|input_data\|_end\|z_.*\)$$/\#define ZO_\2 0x\1/p'

quiet_cmd_zoffset = ZOFFSET $@
      cmd_zoffset = $(NM) $< | sed -n $(sed-zoffset) > $@

targets += zoffset.h
$(obj)/zoffset.h: $(obj)/compressed/vmlinux FORCE
	$(call if_changed,zoffset)

AFLAGS_header.o += -I$(obj)
$(obj)/header.o: $(obj)/voffset.h $(obj)/zoffset.h

LDFLAGS_setup.elf	:= -T
$(obj)/setup.elf: $(src)/setup.ld $(SETUP_OBJS) FORCE
......
relocs
vmlinux.bin.all
vmlinux.relocs
vmlinux.lds
mkpiggy
piggy.S
@@ -19,7 +19,9 @@ KBUILD_AFLAGS := $(KBUILD_CFLAGS) -D__ASSEMBLY__
LDFLAGS := -m elf_$(UTS_MACHINE)
LDFLAGS_vmlinux := -T

hostprogs-y	:= mkpiggy

$(obj)/vmlinux: $(obj)/vmlinux.lds $(obj)/head_$(BITS).o $(obj)/misc.o $(obj)/piggy.o FORCE
	$(call if_changed,ld)
	@:

@@ -29,7 +31,7 @@ $(obj)/vmlinux.bin: vmlinux FORCE

targets += vmlinux.bin.all vmlinux.relocs relocs
hostprogs-$(CONFIG_X86_NEED_RELOCS) += relocs

quiet_cmd_relocs = RELOCS  $@
      cmd_relocs = $(obj)/relocs $< > $@;$(obj)/relocs --abs-relocs $<

@@ -37,46 +39,22 @@ $(obj)/vmlinux.relocs: vmlinux $(obj)/relocs FORCE
	$(call if_changed,relocs)

vmlinux.bin.all-y := $(obj)/vmlinux.bin
vmlinux.bin.all-$(CONFIG_X86_NEED_RELOCS) += $(obj)/vmlinux.relocs
$(obj)/vmlinux.bin.gz: $(vmlinux.bin.all-y) FORCE
	$(call if_changed,gzip)
$(obj)/vmlinux.bin.bz2: $(vmlinux.bin.all-y) FORCE
	$(call if_changed,bzip2)
$(obj)/vmlinux.bin.lzma: $(vmlinux.bin.all-y) FORCE
	$(call if_changed,lzma)

suffix-$(CONFIG_KERNEL_GZIP)	:= gz
suffix-$(CONFIG_KERNEL_BZIP2)	:= bz2
suffix-$(CONFIG_KERNEL_LZMA)	:= lzma

quiet_cmd_mkpiggy = MKPIGGY $@
      cmd_mkpiggy = $(obj)/mkpiggy $< > $@ || ( rm -f $@ ; false )

targets += piggy.S
$(obj)/piggy.S: $(obj)/vmlinux.bin.$(suffix-y) $(obj)/mkpiggy FORCE
	$(call if_changed,mkpiggy)
@@ -12,16 +12,16 @@
 * the page directory. [According to comments etc elsewhere on a compressed
 * kernel it will end up at 0x1000 + 1Mb I hope so as I assume this. - AC]
 *
 * Page 0 is deliberately kept safe, since System Management Mode code in
 * laptops may need to access the BIOS data stored there.  This is also
 * useful for future device drivers that either access the BIOS via VM86
 * mode.
 */

/*
 * High loaded stuff by Hans Lermen & Werner Almesberger, Feb. 1996
 */
	.text

#include <linux/linkage.h>
#include <asm/segment.h>
@@ -29,161 +29,151 @@
#include <asm/boot.h>
#include <asm/asm-offsets.h>

	.section ".text.head","ax",@progbits
ENTRY(startup_32)
	cld
	/*
	 * Test KEEP_SEGMENTS flag to see if the bootloader is asking
	 * us to not reload segments
	 */
	testb	$(1<<6), BP_loadflags(%esi)
	jnz	1f

	cli
	movl	$__BOOT_DS, %eax
	movl	%eax, %ds
	movl	%eax, %es
	movl	%eax, %fs
	movl	%eax, %gs
	movl	%eax, %ss
1:
/*
 * Calculate the delta between where we were compiled to run
 * at and where we were actually loaded at.  This can only be done
 * with a short local call on x86.  Nothing else will tell us what
 * address we are running at.  The reserved chunk of the real-mode
 * data at 0x1e4 (defined as a scratch field) are used as the stack
 * for this calculation.  Only 4 bytes are needed.
 */
	leal	(BP_scratch+4)(%esi), %esp
	call	1f
1:	popl	%ebp
	subl	$1b, %ebp
/*
 * %ebp contains the address we are loaded at by the boot loader and %ebx
 * contains the address where we should move the kernel image temporarily
 * for safe in-place decompression.
 */

#ifdef CONFIG_RELOCATABLE
	movl	%ebp, %ebx
	movl	BP_kernel_alignment(%esi), %eax
	decl	%eax
	addl	%eax, %ebx
	notl	%eax
	andl	%eax, %ebx
#else
	movl	$LOAD_PHYSICAL_ADDR, %ebx
#endif

	/* Target address to relocate to for decompression */
	addl	$z_extract_offset, %ebx

	/* Set up the stack */
	leal	boot_stack_end(%ebx), %esp

	/* Zero EFLAGS */
	pushl	$0
	popfl

/*
 * Copy the compressed kernel to the end of our buffer
 * where decompression in place becomes safe.
 */
	pushl	%esi
	leal	(_bss-4)(%ebp), %esi
	leal	(_bss-4)(%ebx), %edi
	movl	$(_bss - startup_32), %ecx
	shrl	$2, %ecx
	std
	rep	movsl
	cld
	popl	%esi

/*
 * Jump to the relocated address.
 */
	leal	relocated(%ebx), %eax
	jmp	*%eax
ENDPROC(startup_32)
	.text
relocated:

/*
 * Clear BSS (stack is currently empty)
 */
	xorl	%eax, %eax
	leal	_bss(%ebx), %edi
	leal	_ebss(%ebx), %ecx
	subl	%edi, %ecx
	shrl	$2, %ecx
	rep	stosl

/*
 * Do the decompression, and jump to the new kernel..
 */
	leal	z_extract_offset_negative(%ebx), %ebp
				/* push arguments for decompress_kernel: */
	pushl	%ebp		/* output address */
	pushl	$z_input_len	/* input_len */
	leal	input_data(%ebx), %eax
	pushl	%eax		/* input_data */
	leal	boot_heap(%ebx), %eax
	pushl	%eax		/* heap area */
	pushl	%esi		/* real mode pointer */
	call	decompress_kernel
	addl	$20, %esp

#if CONFIG_RELOCATABLE
/*
 * Find the address of the relocations.
 */
	leal	z_output_len(%ebp), %edi

/*
 * Calculate the delta between where vmlinux was compiled to run
 * and where it was actually loaded.
 */
	movl	%ebp, %ebx
	subl	$LOAD_PHYSICAL_ADDR, %ebx
	jz	2f	/* Nothing to be done if loaded at compiled addr. */

/*
 * Process relocations.
 */
1:	subl	$4, %edi
	movl	(%edi), %ecx
	testl	%ecx, %ecx
	jz	2f
	addl	%ebx, -__PAGE_OFFSET(%ebx, %ecx)
	jmp	1b
2:
#endif

/*
 * Jump to the decompressed kernel.
 */
	xorl	%ebx, %ebx
	jmp	*%ebp

/*
 * Stack and heap for uncompression
 */
	.bss
	.balign 4
boot_heap:
	.fill BOOT_HEAP_SIZE, 1, 0
boot_stack:
......
@@ -21,8 +21,8 @@
/*
 * High loaded stuff by Hans Lermen & Werner Almesberger, Feb. 1996
 */
	.code32
	.text

#include <linux/linkage.h>
#include <asm/segment.h>
@@ -33,12 +33,14 @@
#include <asm/processor-flags.h>
#include <asm/asm-offsets.h>

	.section ".text.head"
	.code32
ENTRY(startup_32)
	cld
	/*
	 * Test KEEP_SEGMENTS flag to see if the bootloader is asking
	 * us to not reload segments
	 */
	testb	$(1<<6), BP_loadflags(%esi)
	jnz	1f
@@ -49,14 +51,15 @@ ENTRY(startup_32)
	movl	%eax, %ss
1:

/*
 * Calculate the delta between where we were compiled to run
 * at and where we were actually loaded at.  This can only be done
 * with a short local call on x86.  Nothing else will tell us what
 * address we are running at.  The reserved chunk of the real-mode
 * data at 0x1e4 (defined as a scratch field) are used as the stack
 * for this calculation.  Only 4 bytes are needed.
 */
	leal	(BP_scratch+4)(%esi), %esp
	call	1f
1:	popl	%ebp
	subl	$1b, %ebp
@@ -70,32 +73,28 @@ ENTRY(startup_32)
	testl	%eax, %eax
	jnz	no_longmode

/*
 * Compute the delta between where we were compiled to run at
 * and where the code will actually run at.
 *
 * %ebp contains the address we are loaded at by the boot loader and %ebx
 * contains the address where we should move the kernel image temporarily
 * for safe in-place decompression.
 */

#ifdef CONFIG_RELOCATABLE
	movl	%ebp, %ebx
	movl	BP_kernel_alignment(%esi), %eax
	decl	%eax
	addl	%eax, %ebx
	notl	%eax
	andl	%eax, %ebx
#else
	movl	$LOAD_PHYSICAL_ADDR, %ebx
#endif

	/* Target address to relocate to for decompression */
	addl	$z_extract_offset, %ebx
/*
 * Prepare for entering 64 bit mode

@@ -114,7 +113,7 @@ ENTRY(startup_32)
	/*
	 * Build early 4G boot pagetable
	 */
	/* Initialize Page tables to 0 */
	leal	pgtable(%ebx), %edi
	xorl	%eax, %eax
	movl	$((4096*6)/4), %ecx
@@ -155,7 +154,8 @@ ENTRY(startup_32)
	btsl	$_EFER_LME, %eax
	wrmsr

	/*
	 * Setup for the jump to 64bit mode
	 *
	 * When the jump is performed we will be in long mode but
	 * in 32bit compatibility mode with EFER.LME = 1, CS.L = 0, CS.D = 1
@@ -184,7 +184,8 @@ no_longmode:
#include "../../kernel/verify_cpu_64.S"

/*
 * Be careful here startup_64 needs to be at a predictable
 * address so I can export it in an ELF header.  Bootloaders
 * should look at the ELF header to find this address, as
 * it may change in the future.
@@ -192,7 +193,8 @@ no_longmode:
	.code64
	.org 0x200
ENTRY(startup_64)
	/*
	 * We come here either from startup_32 or directly from a
	 * 64bit bootloader.  If we come here from a bootloader we depend on
	 * an identity mapped page table being provided that maps our
	 * entire text+data+bss and hopefully all of memory.
@@ -209,50 +211,54 @@ ENTRY(startup_64)
	movl	$0x20, %eax
	ltr	%ax

	/*
	 * Compute the decompressed kernel start address.  It is where
	 * we were loaded at aligned to a 2M boundary. %rbp contains the
	 * decompressed kernel start address.
	 *
	 * If it is a relocatable kernel then decompress and run the kernel
	 * from load address aligned to 2MB addr, otherwise decompress and
	 * run the kernel from LOAD_PHYSICAL_ADDR
	 *
	 * We cannot rely on the calculation done in 32-bit mode, since we
	 * may have been invoked via the 64-bit entry point.
	 */

	/* Start with the delta to where the kernel will run at. */
#ifdef CONFIG_RELOCATABLE
	leaq	startup_32(%rip) /* - $startup_32 */, %rbp
	movl	BP_kernel_alignment(%rsi), %eax
	decl	%eax
	addq	%rax, %rbp
	notq	%rax
	andq	%rax, %rbp
#else
	movq	$LOAD_PHYSICAL_ADDR, %rbp
#endif

	/* Target address to relocate to for decompression */
	leaq	z_extract_offset(%rbp), %rbx

	/* Set up the stack */
	leaq	boot_stack_end(%rbx), %rsp

	/* Zero EFLAGS */
	pushq	$0
	popfq

/*
 * Copy the compressed kernel to the end of our buffer
 * where decompression in place becomes safe.
 */
	pushq	%rsi
	leaq	(_bss-8)(%rip), %rsi
	leaq	(_bss-8)(%rbx), %rdi
	movq	$_bss /* - $startup_32 */, %rcx
	shrq	$3, %rcx
	std
	rep	movsq
	cld
	popq	%rsi

/*
 * Jump to the relocated address.
 */
@@ -260,37 +266,28 @@ ENTRY(startup_64)
	leaq	relocated(%rbx), %rax
	jmp	*%rax
	.text
relocated:

/*
 * Clear BSS (stack is currently empty)
 */
	xorl	%eax, %eax
	leaq	_bss(%rip), %rdi
	leaq	_ebss(%rip), %rcx
	subq	%rdi, %rcx
	shrq	$3, %rcx
	rep	stosq

/*
 * Do the decompression, and jump to the new kernel..
 */
	pushq	%rsi			/* Save the real mode argument */
	movq	%rsi, %rdi		/* real mode address */
	leaq	boot_heap(%rip), %rsi	/* malloc area for uncompression */
	leaq	input_data(%rip), %rdx	/* input_data */
	movl	$z_input_len, %ecx	/* input_len */
	movq	%rbp, %r8		/* output target address */
	call	decompress_kernel
	popq	%rsi

@@ -311,11 +308,21 @@ gdt:
	.quad	0x0000000000000000	/* TS continued */
gdt_end:

/*
 * Stack and heap for uncompression
 */
	.bss
	.balign 4
boot_heap:
	.fill BOOT_HEAP_SIZE, 1, 0
boot_stack:
	.fill BOOT_STACK_SIZE, 1, 0
boot_stack_end:
/*
* Space for page tables (not in .bss so not zeroed)
*/
.section ".pgtable","a",@nobits
.balign 4096
pgtable:
.fill 6*4096, 1, 0
@@ -325,20 +325,18 @@ asmlinkage void decompress_kernel(void *rmode, memptr heap,
	free_mem_ptr     = heap;	/* Heap */
	free_mem_end_ptr = heap + BOOT_HEAP_SIZE;
if ((unsigned long)output & (MIN_KERNEL_ALIGN - 1))
error("Destination address inappropriately aligned");
#ifdef CONFIG_X86_64
	if (heap > 0x3fffffffffffUL)
		error("Destination address too large");
#else
	if (heap > ((-__PAGE_OFFSET-(512<<20)-1) & 0x7fffffff))
		error("Destination address too large");
#endif
#ifndef CONFIG_RELOCATABLE
	if ((unsigned long)output != LOAD_PHYSICAL_ADDR)
		error("Wrong destination address");
#endif

	if (!quiet)
......
/* ----------------------------------------------------------------------- *
*
* Copyright (C) 2009 Intel Corporation. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License version
* 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
* 02110-1301, USA.
*
* H. Peter Anvin <hpa@linux.intel.com>
*
* ----------------------------------------------------------------------- */
/*
* Compute the desired load offset from a compressed program; outputs
* a small assembly wrapper with the appropriate symbols defined.
*/
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <inttypes.h>
static uint32_t getle32(const void *p)
{
const uint8_t *cp = p;
return (uint32_t)cp[0] + ((uint32_t)cp[1] << 8) +
((uint32_t)cp[2] << 16) + ((uint32_t)cp[3] << 24);
}
int main(int argc, char *argv[])
{
uint32_t olen;
long ilen;
unsigned long offs;
FILE *f;
if (argc < 2) {
fprintf(stderr, "Usage: %s compressed_file\n", argv[0]);
return 1;
}
/* Get the information for the compressed kernel image first */
f = fopen(argv[1], "r");
if (!f) {
perror(argv[1]);
return 1;
}
	if (fseek(f, -4L, SEEK_END)) {
		perror(argv[1]);
		return 1;
	}
fread(&olen, sizeof olen, 1, f);
ilen = ftell(f);
olen = getle32(&olen);
fclose(f);
/*
* Now we have the input (compressed) and output (uncompressed)
* sizes, compute the necessary decompression offset...
*/
offs = (olen > ilen) ? olen - ilen : 0;
offs += olen >> 12; /* Add 8 bytes for each 32K block */
offs += 32*1024 + 18; /* Add 32K + 18 bytes slack */
offs = (offs+4095) & ~4095; /* Round to a 4K boundary */
printf(".section \".rodata.compressed\",\"a\",@progbits\n");
printf(".globl z_input_len\n");
printf("z_input_len = %lu\n", ilen);
printf(".globl z_output_len\n");
printf("z_output_len = %lu\n", (unsigned long)olen);
printf(".globl z_extract_offset\n");
printf("z_extract_offset = 0x%lx\n", offs);
/* z_extract_offset_negative allows simplification of head_32.S */
printf(".globl z_extract_offset_negative\n");
printf("z_extract_offset_negative = -0x%lx\n", offs);
printf(".globl input_data, input_data_end\n");
printf("input_data:\n");
printf(".incbin \"%s\"\n", argv[1]);
printf("input_data_end:\n");
return 0;
}
OUTPUT_FORMAT(CONFIG_OUTPUT_FORMAT, CONFIG_OUTPUT_FORMAT, CONFIG_OUTPUT_FORMAT)
#undef i386
#include <asm/page_types.h>
#ifdef CONFIG_X86_64
OUTPUT_ARCH(i386:x86-64)
ENTRY(startup_64)
#else
OUTPUT_ARCH(i386)
ENTRY(startup_32)
#endif
SECTIONS
{
	/* Be careful parts of head_64.S assume startup_32 is at
@@ -33,16 +44,22 @@ SECTIONS
		*(.data.*)
		_edata = . ;
	}
. = ALIGN(CONFIG_X86_L1_CACHE_BYTES);
	.bss : {
		_bss = . ;
		*(.bss)
		*(.bss.*)
		*(COMMON)
		. = ALIGN(8);	/* For convenience during zeroing */
		_ebss = .;
	}
#ifdef CONFIG_X86_64
. = ALIGN(PAGE_SIZE);
.pgtable : {
_pgtable = . ;
*(.pgtable)
_epgtable = . ;
}
#endif
_end = .;
}
SECTIONS
{
.rodata.compressed : {
input_len = .;
LONG(input_data_end - input_data) input_data = .;
*(.data)
output_len = . - 4;
input_data_end = .;
}
}
OUTPUT_FORMAT("elf32-i386", "elf32-i386", "elf32-i386")
OUTPUT_ARCH(i386)
ENTRY(startup_32)
SECTIONS
{
/* Be careful parts of head_32.S assume startup_32 is at
* address 0.
*/
. = 0;
.text.head : {
_head = . ;
*(.text.head)
_ehead = . ;
}
.rodata.compressed : {
*(.rodata.compressed)
}
.text : {
_text = .; /* Text */
*(.text)
*(.text.*)
_etext = . ;
}
.rodata : {
_rodata = . ;
*(.rodata) /* read-only data */
*(.rodata.*)
_erodata = . ;
}
.data : {
_data = . ;
*(.data)
*(.data.*)
_edata = . ;
}
.bss : {
_bss = . ;
*(.bss)
*(.bss.*)
*(COMMON)
_end = . ;
}
}
@@ -22,7 +22,8 @@
#include <asm/page_types.h>
#include <asm/setup.h>
#include "boot.h"
#include "voffset.h"
#include "zoffset.h"
BOOTSEG		= 0x07C0		/* original address of boot-sector */
SYSSEG		= 0x1000		/* historical load address >> 4 */

@@ -115,7 +116,7 @@ _start:
	# Part 2 of the header, from the old setup.S

		.ascii	"HdrS"		# header signature
		.word	0x020a		# header version number (>= 0x0105)
					# or else old loadlin-1.5 will fail)
		.globl realmode_swtch
realmode_swtch:	.word	0, 0		# default_switch, SETUPSEG
@@ -168,7 +169,11 @@ heap_end_ptr:	.word	_end+STACK_SIZE-512
					# end of setup code can be used by setup
					# for local heap purposes.

ext_loader_ver:
		.byte	0		# Extended boot loader version
ext_loader_type:
		.byte	0		# Extended boot loader type

cmd_line_ptr:	.long	0		# (Header version 0x0202 or later)
					# If nonzero, a 32-bit pointer
					# to the kernel command line.

@@ -200,7 +205,7 @@ relocatable_kernel:	.byte	1
#else
relocatable_kernel:	.byte	0
#endif
min_alignment:		.byte	MIN_KERNEL_ALIGN_LG2	# minimum alignment
pad3:			.word	0

cmdline_size:	.long	COMMAND_LINE_SIZE-1	#length of the command line,
@@ -212,13 +217,24 @@ hardware_subarch:	.long 0			# subarchitecture, added with 2.07
hardware_subarch_data:	.quad 0

payload_offset:		.long ZO_input_data
payload_length:		.long ZO_z_input_len

setup_data:		.quad 0			# 64-bit physical pointer to
						# single linked list of
						# struct setup_data
pref_address: .quad LOAD_PHYSICAL_ADDR # preferred load addr
#define ZO_INIT_SIZE (ZO__end - ZO_startup_32 + ZO_z_extract_offset)
#define VO_INIT_SIZE (VO__end - VO__text)
#if ZO_INIT_SIZE > VO_INIT_SIZE
#define INIT_SIZE ZO_INIT_SIZE
#else
#define INIT_SIZE VO_INIT_SIZE
#endif
init_size: .long INIT_SIZE # kernel initialization size
# End of setup header #####################################################

	.section ".inittext", "ax"
......
@@ -8,11 +8,26 @@
#ifdef __KERNEL__
#include <asm/page_types.h>
/* Physical address where kernel should be loaded. */
#define LOAD_PHYSICAL_ADDR ((CONFIG_PHYSICAL_START \
				+ (CONFIG_PHYSICAL_ALIGN - 1)) \
				& ~(CONFIG_PHYSICAL_ALIGN - 1))
/* Minimum kernel alignment, as a power of two */
#ifdef CONFIG_X86_64
#define MIN_KERNEL_ALIGN_LG2 PMD_SHIFT
#else
#define MIN_KERNEL_ALIGN_LG2 (PAGE_SHIFT+1)
#endif
#define MIN_KERNEL_ALIGN (_AC(1, UL) << MIN_KERNEL_ALIGN_LG2)
#if (CONFIG_PHYSICAL_ALIGN & (CONFIG_PHYSICAL_ALIGN-1)) || \
(CONFIG_PHYSICAL_ALIGN < (_AC(1, UL) << MIN_KERNEL_ALIGN_LG2))
#error "Invalid value for CONFIG_PHYSICAL_ALIGN"
#endif
#ifdef CONFIG_KERNEL_BZIP2
#define BOOT_HEAP_SIZE	0x400000
#else /* !CONFIG_KERNEL_BZIP2 */
......
@@ -50,7 +50,8 @@ struct setup_header {
	__u32	ramdisk_size;
	__u32	bootsect_kludge;
	__u16	heap_end_ptr;
	__u8	ext_loader_ver;
	__u8	ext_loader_type;
	__u32	cmd_line_ptr;
	__u32	initrd_addr_max;
	__u32	kernel_alignment;
......
@@ -32,17 +32,9 @@
 */
#define __PAGE_OFFSET		_AC(0xffff880000000000, UL)

#define __PHYSICAL_START	((CONFIG_PHYSICAL_START +		\
				  (CONFIG_PHYSICAL_ALIGN - 1)) &	\
				 ~(CONFIG_PHYSICAL_ALIGN - 1))

#define __START_KERNEL		(__START_KERNEL_map + __PHYSICAL_START)
#define __START_KERNEL_map	_AC(0xffffffff80000000, UL)
......
@@ -815,6 +815,7 @@ extern unsigned int BIOS_revision;

/* Boot loader type from the setup header: */
extern int			bootloader_type;
extern int			bootloader_version;

extern char			ignore_fpu_irq;
......
@@ -146,4 +146,5 @@ void foo(void)
	OFFSET(BP_loadflags, boot_params, hdr.loadflags);
	OFFSET(BP_hardware_subarch, boot_params, hdr.hardware_subarch);
	OFFSET(BP_version, boot_params, hdr.version);
	OFFSET(BP_kernel_alignment, boot_params, hdr.kernel_alignment);
}
@@ -125,6 +125,7 @@ int main(void)
	OFFSET(BP_loadflags, boot_params, hdr.loadflags);
	OFFSET(BP_hardware_subarch, boot_params, hdr.hardware_subarch);
	OFFSET(BP_version, boot_params, hdr.version);
OFFSET(BP_kernel_alignment, boot_params, hdr.kernel_alignment);
	BLANK();
	DEFINE(PAGE_SIZE_asm,	PAGE_SIZE);
......
@@ -608,13 +608,6 @@ ignore_int:
ENTRY(initial_code)
	.long i386_start_kernel

/*
 * BSS section
 */
......
@@ -214,8 +214,8 @@ unsigned long mmu_cr4_features;
unsigned long mmu_cr4_features = X86_CR4_PAE;
#endif

/* Boot loader ID and version as integers, for the benefit of proc_dointvec */
int bootloader_type, bootloader_version;

/*
 * Setup options

@@ -706,6 +706,12 @@ void __init setup_arch(char **cmdline_p)
#endif
	saved_video_mode = boot_params.hdr.vid_mode;
	bootloader_type = boot_params.hdr.type_of_loader;
if ((bootloader_type >> 4) == 0xe) {
bootloader_type &= 0xf;
bootloader_type |= (boot_params.hdr.ext_loader_type+0x10) << 4;
}
bootloader_version = bootloader_type & 0xf;
bootloader_version |= boot_params.hdr.ext_loader_ver << 4;
#ifdef CONFIG_BLK_DEV_RAM
	rd_image_start = boot_params.hdr.ram_size & RAMDISK_IMAGE_START_MASK;
......
/*
* ld script for the x86 kernel
*
* Historic 32-bit version written by Martin Mares <mj@atrey.karlin.mff.cuni.cz>
*
* Modernisation, unification and other changes and fixes:
* Copyright (C) 2007-2009 Sam Ravnborg <sam@ravnborg.org>
*
*
 * Don't define absolute symbols until and unless you know that the symbol
 * value should remain constant even if the kernel image is relocated
 * at run time. Absolute symbols are not relocated. If the symbol value should
 * change when the kernel is relocated, make the symbol section-relative and
 * put it inside the section definition.
*/
#ifdef CONFIG_X86_32
#define LOAD_OFFSET __PAGE_OFFSET
#else
#define LOAD_OFFSET __START_KERNEL_map
#endif
#include <asm-generic/vmlinux.lds.h>
#include <asm/asm-offsets.h>
#include <asm/thread_info.h>
#include <asm/page_types.h>
#include <asm/cache.h>
#include <asm/boot.h>
#undef i386 /* in case the preprocessor is a 32bit one */
OUTPUT_FORMAT(CONFIG_OUTPUT_FORMAT, CONFIG_OUTPUT_FORMAT, CONFIG_OUTPUT_FORMAT)
#ifdef CONFIG_X86_32
OUTPUT_ARCH(i386)
ENTRY(phys_startup_32)
jiffies = jiffies_64;
#else
OUTPUT_ARCH(i386:x86-64)
ENTRY(phys_startup_64)
jiffies_64 = jiffies;
#endif
PHDRS {
text PT_LOAD FLAGS(5); /* R_E */
data PT_LOAD FLAGS(7); /* RWE */
#ifdef CONFIG_X86_64
user PT_LOAD FLAGS(7); /* RWE */
data.init PT_LOAD FLAGS(7); /* RWE */
#ifdef CONFIG_SMP
percpu PT_LOAD FLAGS(7); /* RWE */
#endif
data.init2 PT_LOAD FLAGS(7); /* RWE */
#endif
note PT_NOTE FLAGS(0); /* ___ */
}
SECTIONS
{
#ifdef CONFIG_X86_32
. = LOAD_OFFSET + LOAD_PHYSICAL_ADDR;
phys_startup_32 = startup_32 - LOAD_OFFSET;
#else
. = __START_KERNEL;
phys_startup_64 = startup_64 - LOAD_OFFSET;
#endif
/* Text and read-only data */
/* bootstrapping code */
.text.head : AT(ADDR(.text.head) - LOAD_OFFSET) {
_text = .;
*(.text.head)
} :text = 0x9090
/* The rest of the text */
.text : AT(ADDR(.text) - LOAD_OFFSET) {
#ifdef CONFIG_X86_32
/* not really needed, already page aligned */
. = ALIGN(PAGE_SIZE);
*(.text.page_aligned)
#endif
. = ALIGN(8);
_stext = .;
TEXT_TEXT
SCHED_TEXT
LOCK_TEXT
KPROBES_TEXT
IRQENTRY_TEXT
*(.fixup)
*(.gnu.warning)
/* End of text section */
_etext = .;
} :text = 0x9090
NOTES :text :note
/* Exception table */
. = ALIGN(16);
__ex_table : AT(ADDR(__ex_table) - LOAD_OFFSET) {
__start___ex_table = .;
*(__ex_table)
__stop___ex_table = .;
} :text = 0x9090
RODATA
/* Data */
. = ALIGN(PAGE_SIZE);
.data : AT(ADDR(.data) - LOAD_OFFSET) {
DATA_DATA
CONSTRUCTORS
#ifdef CONFIG_X86_64
/* End of data section */
_edata = .;
#endif
} :data
#ifdef CONFIG_X86_32
/* 32 bit has nosave before _edata */
. = ALIGN(PAGE_SIZE);
.data_nosave : AT(ADDR(.data_nosave) - LOAD_OFFSET) {
__nosave_begin = .;
*(.data.nosave)
. = ALIGN(PAGE_SIZE);
__nosave_end = .;
}
#endif
. = ALIGN(PAGE_SIZE);
.data.page_aligned : AT(ADDR(.data.page_aligned) - LOAD_OFFSET) {
*(.data.page_aligned)
*(.data.idt)
}
#ifdef CONFIG_X86_32
. = ALIGN(32);
#else
. = ALIGN(PAGE_SIZE);
. = ALIGN(CONFIG_X86_L1_CACHE_BYTES);
#endif
.data.cacheline_aligned :
AT(ADDR(.data.cacheline_aligned) - LOAD_OFFSET) {
*(.data.cacheline_aligned)
}
/* rarely changed data like cpu maps */
#ifdef CONFIG_X86_32
. = ALIGN(32);
#else
. = ALIGN(CONFIG_X86_INTERNODE_CACHE_BYTES);
#endif
.data.read_mostly : AT(ADDR(.data.read_mostly) - LOAD_OFFSET) {
*(.data.read_mostly)
#ifdef CONFIG_X86_32
/* End of data section */
_edata = .;
#endif
}
#ifdef CONFIG_X86_64
#define VSYSCALL_ADDR (-10*1024*1024)
#define VSYSCALL_PHYS_ADDR ((LOADADDR(.data.read_mostly) + \
SIZEOF(.data.read_mostly) + 4095) & ~(4095))
#define VSYSCALL_VIRT_ADDR ((ADDR(.data.read_mostly) + \
SIZEOF(.data.read_mostly) + 4095) & ~(4095))
#define VLOAD_OFFSET (VSYSCALL_ADDR - VSYSCALL_PHYS_ADDR)
#define VLOAD(x) (ADDR(x) - VLOAD_OFFSET)
#define VVIRT_OFFSET (VSYSCALL_ADDR - VSYSCALL_VIRT_ADDR)
#define VVIRT(x) (ADDR(x) - VVIRT_OFFSET)
. = VSYSCALL_ADDR;
.vsyscall_0 : AT(VSYSCALL_PHYS_ADDR) {
*(.vsyscall_0)
} :user
__vsyscall_0 = VSYSCALL_VIRT_ADDR;
. = ALIGN(CONFIG_X86_L1_CACHE_BYTES);
.vsyscall_fn : AT(VLOAD(.vsyscall_fn)) {
*(.vsyscall_fn)
}
. = ALIGN(CONFIG_X86_L1_CACHE_BYTES);
.vsyscall_gtod_data : AT(VLOAD(.vsyscall_gtod_data)) {
*(.vsyscall_gtod_data)
}
vsyscall_gtod_data = VVIRT(.vsyscall_gtod_data);
.vsyscall_clock : AT(VLOAD(.vsyscall_clock)) {
*(.vsyscall_clock)
}
vsyscall_clock = VVIRT(.vsyscall_clock);
.vsyscall_1 ADDR(.vsyscall_0) + 1024: AT(VLOAD(.vsyscall_1)) {
*(.vsyscall_1)
}
.vsyscall_2 ADDR(.vsyscall_0) + 2048: AT(VLOAD(.vsyscall_2)) {
*(.vsyscall_2)
}
.vgetcpu_mode : AT(VLOAD(.vgetcpu_mode)) {
*(.vgetcpu_mode)
}
vgetcpu_mode = VVIRT(.vgetcpu_mode);
. = ALIGN(CONFIG_X86_L1_CACHE_BYTES);
.jiffies : AT(VLOAD(.jiffies)) {
*(.jiffies)
}
jiffies = VVIRT(.jiffies);
.vsyscall_3 ADDR(.vsyscall_0) + 3072: AT(VLOAD(.vsyscall_3)) {
*(.vsyscall_3)
}
. = VSYSCALL_VIRT_ADDR + PAGE_SIZE;
#undef VSYSCALL_ADDR
#undef VSYSCALL_PHYS_ADDR
#undef VSYSCALL_VIRT_ADDR
#undef VLOAD_OFFSET
#undef VLOAD
#undef VVIRT_OFFSET
#undef VVIRT
#endif /* CONFIG_X86_64 */
/* init_task */
. = ALIGN(THREAD_SIZE);
.data.init_task : AT(ADDR(.data.init_task) - LOAD_OFFSET) {
*(.data.init_task)
}
#ifdef CONFIG_X86_64
:data.init
#endif
/*
* smp_locks might be freed after init
* start/end must be page aligned
*/
. = ALIGN(PAGE_SIZE);
.smp_locks : AT(ADDR(.smp_locks) - LOAD_OFFSET) {
__smp_locks = .;
*(.smp_locks)
__smp_locks_end = .;
. = ALIGN(PAGE_SIZE);
}
/* Init code and data - will be freed after init */
. = ALIGN(PAGE_SIZE);
.init.text : AT(ADDR(.init.text) - LOAD_OFFSET) {
__init_begin = .; /* paired with __init_end */
_sinittext = .;
INIT_TEXT
_einittext = .;
}
.init.data : AT(ADDR(.init.data) - LOAD_OFFSET) {
INIT_DATA
}
. = ALIGN(16);
.init.setup : AT(ADDR(.init.setup) - LOAD_OFFSET) {
__setup_start = .;
*(.init.setup)
__setup_end = .;
}
.initcall.init : AT(ADDR(.initcall.init) - LOAD_OFFSET) {
__initcall_start = .;
INITCALLS
__initcall_end = .;
}
.con_initcall.init : AT(ADDR(.con_initcall.init) - LOAD_OFFSET) {
__con_initcall_start = .;
*(.con_initcall.init)
__con_initcall_end = .;
}
.x86_cpu_dev.init : AT(ADDR(.x86_cpu_dev.init) - LOAD_OFFSET) {
__x86_cpu_dev_start = .;
*(.x86_cpu_dev.init)
__x86_cpu_dev_end = .;
}
SECURITY_INIT
. = ALIGN(8);
.parainstructions : AT(ADDR(.parainstructions) - LOAD_OFFSET) {
__parainstructions = .;
*(.parainstructions)
__parainstructions_end = .;
}
. = ALIGN(8);
.altinstructions : AT(ADDR(.altinstructions) - LOAD_OFFSET) {
__alt_instructions = .;
*(.altinstructions)
__alt_instructions_end = .;
}
.altinstr_replacement : AT(ADDR(.altinstr_replacement) - LOAD_OFFSET) {
*(.altinstr_replacement)
}
/*
 * .exit.text is discarded at run time, not link time, to deal with
 * references from .altinstructions and .eh_frame
*/
.exit.text : AT(ADDR(.exit.text) - LOAD_OFFSET) {
EXIT_TEXT
}
.exit.data : AT(ADDR(.exit.data) - LOAD_OFFSET) {
EXIT_DATA
}
#ifdef CONFIG_BLK_DEV_INITRD
. = ALIGN(PAGE_SIZE);
.init.ramfs : AT(ADDR(.init.ramfs) - LOAD_OFFSET) {
__initramfs_start = .;
*(.init.ramfs)
__initramfs_end = .;
}
#endif
#if defined(CONFIG_X86_64) && defined(CONFIG_SMP)
/*
 * percpu offsets are zero-based on SMP.  PERCPU_VADDR() changes the
 * output PHDR, so the next output section - .data_nosave - has to
 * start in another section, data.init2.  Also, the pda has to sit at
 * the head of the percpu area.  Preallocate it and define the percpu
 * offset symbol so that it can be accessed as a percpu variable.
*/
. = ALIGN(PAGE_SIZE);
PERCPU_VADDR(0, :percpu)
#else
PERCPU(PAGE_SIZE)
#endif
. = ALIGN(PAGE_SIZE);
/* freed after init ends here */
.init.end : AT(ADDR(.init.end) - LOAD_OFFSET) {
__init_end = .;
}
#ifdef CONFIG_X86_64
.data_nosave : AT(ADDR(.data_nosave) - LOAD_OFFSET) {
. = ALIGN(PAGE_SIZE);
__nosave_begin = .;
*(.data.nosave)
. = ALIGN(PAGE_SIZE);
__nosave_end = .;
} :data.init2
/* use another section data.init2, see PERCPU_VADDR() above */
#endif
/* BSS */
. = ALIGN(PAGE_SIZE);
.bss : AT(ADDR(.bss) - LOAD_OFFSET) {
__bss_start = .;
*(.bss.page_aligned)
*(.bss)
. = ALIGN(4);
__bss_stop = .;
}
. = ALIGN(PAGE_SIZE);
.brk : AT(ADDR(.brk) - LOAD_OFFSET) {
__brk_base = .;
. += 64 * 1024; /* 64k alignment slop space */
*(.brk_reservation) /* areas brk users have reserved */
__brk_limit = .;
}
.end : AT(ADDR(.end) - LOAD_OFFSET) {
_end = .;
}
/* Sections to be discarded */
/DISCARD/ : {
*(.exitcall.exit)
*(.eh_frame)
*(.discard)
}
STABS_DEBUG
DWARF_DEBUG
}
#ifdef CONFIG_X86_32
ASSERT((_end - LOAD_OFFSET <= KERNEL_IMAGE_SIZE),
"kernel image bigger than KERNEL_IMAGE_SIZE")
#else
/*
* Per-cpu symbols which need to be offset from __per_cpu_load
* for the boot processor.
*/
#define INIT_PER_CPU(x) init_per_cpu__##x = per_cpu__##x + __per_cpu_load
INIT_PER_CPU(gdt_page);
INIT_PER_CPU(irq_stack_union);
/*
* Build-time check on the image size:
*/
ASSERT((_end - _text <= KERNEL_IMAGE_SIZE),
"kernel image bigger than KERNEL_IMAGE_SIZE")
#ifdef CONFIG_SMP
ASSERT((per_cpu__irq_stack_union == 0),
"irq_stack_union is not at start of per-cpu area");
#endif
#endif /* CONFIG_X86_32 */
#ifdef CONFIG_KEXEC
#include <asm/kexec.h>
ASSERT(kexec_control_code_size <= KEXEC_CONTROL_CODE_MAX_SIZE,
"kexec control code size is too big")
#endif
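The header comment's distinction between absolute and section-relative
symbols is easy to check on a built image: nm(1) marks absolute symbols
with 'A', and those are exactly the ones relocation leaves alone.  A
minimal sketch, assuming a built vmlinux at the top of the tree:

	# Absolute symbols ('A') are never adjusted when the kernel is
	# relocated; section-relative ones ('T', 'D', 'B', ...) are.
	nm vmlinux | awk '$2 == "A"' | head
	# On x86-64 SMP the zero-based percpu layout places
	# per_cpu__irq_stack_union at address 0, matching the ASSERT above:
	nm vmlinux | grep -w per_cpu__irq_stack_union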
/* ld script to make i386 Linux kernel
* Written by Martin Mares <mj@atrey.karlin.mff.cuni.cz>;
*
 * Don't define absolute symbols until and unless you know that the symbol
 * value should remain constant even if the kernel image is relocated
 * at run time.  Absolute symbols are not relocated.  If the symbol value
 * should change when the kernel is relocated, make the symbol
 * section-relative and put it inside the section definition.
*/
#define LOAD_OFFSET __PAGE_OFFSET
#include <asm-generic/vmlinux.lds.h>
#include <asm/thread_info.h>
#include <asm/page_types.h>
#include <asm/cache.h>
#include <asm/boot.h>
OUTPUT_FORMAT("elf32-i386", "elf32-i386", "elf32-i386")
OUTPUT_ARCH(i386)
ENTRY(phys_startup_32)
jiffies = jiffies_64;
PHDRS {
text PT_LOAD FLAGS(5); /* R_E */
data PT_LOAD FLAGS(7); /* RWE */
note PT_NOTE FLAGS(0); /* ___ */
}
SECTIONS
{
. = LOAD_OFFSET + LOAD_PHYSICAL_ADDR;
phys_startup_32 = startup_32 - LOAD_OFFSET;
.text.head : AT(ADDR(.text.head) - LOAD_OFFSET) {
_text = .; /* Text and read-only data */
*(.text.head)
} :text = 0x9090
/* read-only */
.text : AT(ADDR(.text) - LOAD_OFFSET) {
. = ALIGN(PAGE_SIZE); /* not really needed, already page aligned */
*(.text.page_aligned)
TEXT_TEXT
SCHED_TEXT
LOCK_TEXT
KPROBES_TEXT
IRQENTRY_TEXT
*(.fixup)
*(.gnu.warning)
_etext = .; /* End of text section */
} :text = 0x9090
NOTES :text :note
. = ALIGN(16); /* Exception table */
__ex_table : AT(ADDR(__ex_table) - LOAD_OFFSET) {
__start___ex_table = .;
*(__ex_table)
__stop___ex_table = .;
} :text = 0x9090
RODATA
/* writeable */
. = ALIGN(PAGE_SIZE);
.data : AT(ADDR(.data) - LOAD_OFFSET) { /* Data */
DATA_DATA
CONSTRUCTORS
} :data
. = ALIGN(PAGE_SIZE);
.data_nosave : AT(ADDR(.data_nosave) - LOAD_OFFSET) {
__nosave_begin = .;
*(.data.nosave)
. = ALIGN(PAGE_SIZE);
__nosave_end = .;
}
. = ALIGN(PAGE_SIZE);
.data.page_aligned : AT(ADDR(.data.page_aligned) - LOAD_OFFSET) {
*(.data.page_aligned)
*(.data.idt)
}
. = ALIGN(32);
.data.cacheline_aligned : AT(ADDR(.data.cacheline_aligned) - LOAD_OFFSET) {
*(.data.cacheline_aligned)
}
/* rarely changed data like cpu maps */
. = ALIGN(32);
.data.read_mostly : AT(ADDR(.data.read_mostly) - LOAD_OFFSET) {
*(.data.read_mostly)
_edata = .; /* End of data section */
}
. = ALIGN(THREAD_SIZE); /* init_task */
.data.init_task : AT(ADDR(.data.init_task) - LOAD_OFFSET) {
*(.data.init_task)
}
/* might get freed after init */
. = ALIGN(PAGE_SIZE);
.smp_locks : AT(ADDR(.smp_locks) - LOAD_OFFSET) {
__smp_locks = .;
*(.smp_locks)
__smp_locks_end = .;
}
/* will be freed after init
 * The following ALIGN() is required to make sure no other data falls
 * on the same page that __smp_alt_end points to, as that page might be
 * freed after boot.  Always make sure an ALIGN() directive is present
 * after the section that contains __smp_alt_end.
 */
. = ALIGN(PAGE_SIZE);
/* will be freed after init */
. = ALIGN(PAGE_SIZE); /* Init code and data */
.init.text : AT(ADDR(.init.text) - LOAD_OFFSET) {
__init_begin = .;
_sinittext = .;
INIT_TEXT
_einittext = .;
}
.init.data : AT(ADDR(.init.data) - LOAD_OFFSET) {
INIT_DATA
}
. = ALIGN(16);
.init.setup : AT(ADDR(.init.setup) - LOAD_OFFSET) {
__setup_start = .;
*(.init.setup)
__setup_end = .;
}
.initcall.init : AT(ADDR(.initcall.init) - LOAD_OFFSET) {
__initcall_start = .;
INITCALLS
__initcall_end = .;
}
.con_initcall.init : AT(ADDR(.con_initcall.init) - LOAD_OFFSET) {
__con_initcall_start = .;
*(.con_initcall.init)
__con_initcall_end = .;
}
.x86_cpu_dev.init : AT(ADDR(.x86_cpu_dev.init) - LOAD_OFFSET) {
__x86_cpu_dev_start = .;
*(.x86_cpu_dev.init)
__x86_cpu_dev_end = .;
}
SECURITY_INIT
. = ALIGN(4);
.altinstructions : AT(ADDR(.altinstructions) - LOAD_OFFSET) {
__alt_instructions = .;
*(.altinstructions)
__alt_instructions_end = .;
}
.altinstr_replacement : AT(ADDR(.altinstr_replacement) - LOAD_OFFSET) {
*(.altinstr_replacement)
}
. = ALIGN(4);
.parainstructions : AT(ADDR(.parainstructions) - LOAD_OFFSET) {
__parainstructions = .;
*(.parainstructions)
__parainstructions_end = .;
}
/* .exit.text is discarded at run time, not link time, to deal with
   references from .altinstructions and .eh_frame */
.exit.text : AT(ADDR(.exit.text) - LOAD_OFFSET) {
EXIT_TEXT
}
.exit.data : AT(ADDR(.exit.data) - LOAD_OFFSET) {
EXIT_DATA
}
#if defined(CONFIG_BLK_DEV_INITRD)
. = ALIGN(PAGE_SIZE);
.init.ramfs : AT(ADDR(.init.ramfs) - LOAD_OFFSET) {
__initramfs_start = .;
*(.init.ramfs)
__initramfs_end = .;
}
#endif
PERCPU(PAGE_SIZE)
. = ALIGN(PAGE_SIZE);
/* freed after init ends here */
.bss : AT(ADDR(.bss) - LOAD_OFFSET) {
__init_end = .;
__bss_start = .; /* BSS */
*(.bss.page_aligned)
*(.bss)
. = ALIGN(4);
__bss_stop = .;
}
.brk : AT(ADDR(.brk) - LOAD_OFFSET) {
. = ALIGN(PAGE_SIZE);
__brk_base = . ;
. += 64 * 1024 ; /* 64k alignment slop space */
*(.brk_reservation) /* areas brk users have reserved */
__brk_limit = . ;
}
.end : AT(ADDR(.end) - LOAD_OFFSET) {
_end = . ;
}
/* Sections to be discarded */
/DISCARD/ : {
*(.exitcall.exit)
*(.discard)
}
STABS_DEBUG
DWARF_DEBUG
}
/*
* Build-time check on the image size:
*/
ASSERT((_end - LOAD_OFFSET <= KERNEL_IMAGE_SIZE),
"kernel image bigger than KERNEL_IMAGE_SIZE")
#ifdef CONFIG_KEXEC
/* Link time checks */
#include <asm/kexec.h>
ASSERT(kexec_control_code_size <= KEXEC_CONTROL_CODE_MAX_SIZE,
"kexec control code size is too big")
#endif
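Every output section in this script is placed with AT(ADDR(section) -
LOAD_OFFSET), so the link-time virtual address and the physical load
address of each segment differ by exactly LOAD_OFFSET (__PAGE_OFFSET
here).  The split is visible in the program headers; a quick check,
assuming a built 32-bit vmlinux:

	# VirtAddr and PhysAddr of each LOAD segment differ by
	# LOAD_OFFSET (__PAGE_OFFSET, 0xc0000000 by default on 32-bit).
	readelf -l vmlinux | grep -w LOAD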
/* ld script to make x86-64 Linux kernel
* Written by Martin Mares <mj@atrey.karlin.mff.cuni.cz>;
*/
#define LOAD_OFFSET __START_KERNEL_map
#include <asm-generic/vmlinux.lds.h>
#include <asm/asm-offsets.h>
#include <asm/page_types.h>
#undef i386     /* in case the preprocessor is a 32-bit one */
OUTPUT_FORMAT("elf64-x86-64", "elf64-x86-64", "elf64-x86-64")
OUTPUT_ARCH(i386:x86-64)
ENTRY(phys_startup_64)
jiffies_64 = jiffies;
PHDRS {
text PT_LOAD FLAGS(5); /* R_E */
data PT_LOAD FLAGS(7); /* RWE */
user PT_LOAD FLAGS(7); /* RWE */
data.init PT_LOAD FLAGS(7); /* RWE */
#ifdef CONFIG_SMP
percpu PT_LOAD FLAGS(7); /* RWE */
#endif
data.init2 PT_LOAD FLAGS(7); /* RWE */
note PT_NOTE FLAGS(0); /* ___ */
}
SECTIONS
{
. = __START_KERNEL;
phys_startup_64 = startup_64 - LOAD_OFFSET;
.text : AT(ADDR(.text) - LOAD_OFFSET) {
_text = .; /* Text and read-only data */
/* First the code that has to be first for bootstrapping */
*(.text.head)
_stext = .;
/* Then the rest */
TEXT_TEXT
SCHED_TEXT
LOCK_TEXT
KPROBES_TEXT
IRQENTRY_TEXT
*(.fixup)
*(.gnu.warning)
_etext = .; /* End of text section */
} :text = 0x9090
NOTES :text :note
. = ALIGN(16); /* Exception table */
__ex_table : AT(ADDR(__ex_table) - LOAD_OFFSET) {
__start___ex_table = .;
*(__ex_table)
__stop___ex_table = .;
} :text = 0x9090
RODATA
. = ALIGN(PAGE_SIZE); /* Align data segment to page size boundary */
/* Data */
.data : AT(ADDR(.data) - LOAD_OFFSET) {
DATA_DATA
CONSTRUCTORS
_edata = .; /* End of data section */
} :data
.data.cacheline_aligned : AT(ADDR(.data.cacheline_aligned) - LOAD_OFFSET) {
. = ALIGN(PAGE_SIZE);
. = ALIGN(CONFIG_X86_L1_CACHE_BYTES);
*(.data.cacheline_aligned)
}
. = ALIGN(CONFIG_X86_INTERNODE_CACHE_BYTES);
.data.read_mostly : AT(ADDR(.data.read_mostly) - LOAD_OFFSET) {
*(.data.read_mostly)
}
#define VSYSCALL_ADDR (-10*1024*1024)
#define VSYSCALL_PHYS_ADDR ((LOADADDR(.data.read_mostly) + SIZEOF(.data.read_mostly) + 4095) & ~(4095))
#define VSYSCALL_VIRT_ADDR ((ADDR(.data.read_mostly) + SIZEOF(.data.read_mostly) + 4095) & ~(4095))
#define VLOAD_OFFSET (VSYSCALL_ADDR - VSYSCALL_PHYS_ADDR)
#define VLOAD(x) (ADDR(x) - VLOAD_OFFSET)
#define VVIRT_OFFSET (VSYSCALL_ADDR - VSYSCALL_VIRT_ADDR)
#define VVIRT(x) (ADDR(x) - VVIRT_OFFSET)
. = VSYSCALL_ADDR;
.vsyscall_0 : AT(VSYSCALL_PHYS_ADDR) { *(.vsyscall_0) } :user
__vsyscall_0 = VSYSCALL_VIRT_ADDR;
. = ALIGN(CONFIG_X86_L1_CACHE_BYTES);
.vsyscall_fn : AT(VLOAD(.vsyscall_fn)) { *(.vsyscall_fn) }
. = ALIGN(CONFIG_X86_L1_CACHE_BYTES);
.vsyscall_gtod_data : AT(VLOAD(.vsyscall_gtod_data))
{ *(.vsyscall_gtod_data) }
vsyscall_gtod_data = VVIRT(.vsyscall_gtod_data);
.vsyscall_clock : AT(VLOAD(.vsyscall_clock))
{ *(.vsyscall_clock) }
vsyscall_clock = VVIRT(.vsyscall_clock);
.vsyscall_1 ADDR(.vsyscall_0) + 1024: AT(VLOAD(.vsyscall_1))
{ *(.vsyscall_1) }
.vsyscall_2 ADDR(.vsyscall_0) + 2048: AT(VLOAD(.vsyscall_2))
{ *(.vsyscall_2) }
.vgetcpu_mode : AT(VLOAD(.vgetcpu_mode)) { *(.vgetcpu_mode) }
vgetcpu_mode = VVIRT(.vgetcpu_mode);
. = ALIGN(CONFIG_X86_L1_CACHE_BYTES);
.jiffies : AT(VLOAD(.jiffies)) { *(.jiffies) }
jiffies = VVIRT(.jiffies);
.vsyscall_3 ADDR(.vsyscall_0) + 3072: AT(VLOAD(.vsyscall_3))
{ *(.vsyscall_3) }
. = VSYSCALL_VIRT_ADDR + PAGE_SIZE;
#undef VSYSCALL_ADDR
#undef VSYSCALL_PHYS_ADDR
#undef VSYSCALL_VIRT_ADDR
#undef VLOAD_OFFSET
#undef VLOAD
#undef VVIRT_OFFSET
#undef VVIRT
.data.init_task : AT(ADDR(.data.init_task) - LOAD_OFFSET) {
. = ALIGN(THREAD_SIZE); /* init_task */
*(.data.init_task)
}:data.init
.data.page_aligned : AT(ADDR(.data.page_aligned) - LOAD_OFFSET) {
. = ALIGN(PAGE_SIZE);
*(.data.page_aligned)
}
.smp_locks : AT(ADDR(.smp_locks) - LOAD_OFFSET) {
/* might get freed after init */
. = ALIGN(PAGE_SIZE);
__smp_alt_begin = .;
__smp_locks = .;
*(.smp_locks)
__smp_locks_end = .;
. = ALIGN(PAGE_SIZE);
__smp_alt_end = .;
}
. = ALIGN(PAGE_SIZE); /* Init code and data */
__init_begin = .; /* paired with __init_end */
.init.text : AT(ADDR(.init.text) - LOAD_OFFSET) {
_sinittext = .;
INIT_TEXT
_einittext = .;
}
.init.data : AT(ADDR(.init.data) - LOAD_OFFSET) {
__initdata_begin = .;
INIT_DATA
__initdata_end = .;
}
.init.setup : AT(ADDR(.init.setup) - LOAD_OFFSET) {
. = ALIGN(16);
__setup_start = .;
*(.init.setup)
__setup_end = .;
}
.initcall.init : AT(ADDR(.initcall.init) - LOAD_OFFSET) {
__initcall_start = .;
INITCALLS
__initcall_end = .;
}
.con_initcall.init : AT(ADDR(.con_initcall.init) - LOAD_OFFSET) {
__con_initcall_start = .;
*(.con_initcall.init)
__con_initcall_end = .;
}
.x86_cpu_dev.init : AT(ADDR(.x86_cpu_dev.init) - LOAD_OFFSET) {
__x86_cpu_dev_start = .;
*(.x86_cpu_dev.init)
__x86_cpu_dev_end = .;
}
SECURITY_INIT
. = ALIGN(8);
.parainstructions : AT(ADDR(.parainstructions) - LOAD_OFFSET) {
__parainstructions = .;
*(.parainstructions)
__parainstructions_end = .;
}
.altinstructions : AT(ADDR(.altinstructions) - LOAD_OFFSET) {
. = ALIGN(8);
__alt_instructions = .;
*(.altinstructions)
__alt_instructions_end = .;
}
.altinstr_replacement : AT(ADDR(.altinstr_replacement) - LOAD_OFFSET) {
*(.altinstr_replacement)
}
/* .exit.text is discarded at run time, not link time, to deal with
   references from .altinstructions and .eh_frame */
.exit.text : AT(ADDR(.exit.text) - LOAD_OFFSET) {
EXIT_TEXT
}
.exit.data : AT(ADDR(.exit.data) - LOAD_OFFSET) {
EXIT_DATA
}
#ifdef CONFIG_BLK_DEV_INITRD
. = ALIGN(PAGE_SIZE);
.init.ramfs : AT(ADDR(.init.ramfs) - LOAD_OFFSET) {
__initramfs_start = .;
*(.init.ramfs)
__initramfs_end = .;
}
#endif
#ifdef CONFIG_SMP
/*
 * percpu offsets are zero-based on SMP.  PERCPU_VADDR() changes the
 * output PHDR, so the next output section - .data_nosave - has to
 * start in another section, data.init2.  Also, the pda has to sit at
 * the head of the percpu area.  Preallocate it and define the percpu
 * offset symbol so that it can be accessed as a percpu variable.
*/
. = ALIGN(PAGE_SIZE);
PERCPU_VADDR(0, :percpu)
#else
PERCPU(PAGE_SIZE)
#endif
. = ALIGN(PAGE_SIZE);
__init_end = .;
.data_nosave : AT(ADDR(.data_nosave) - LOAD_OFFSET) {
. = ALIGN(PAGE_SIZE);
__nosave_begin = .;
*(.data.nosave)
. = ALIGN(PAGE_SIZE);
__nosave_end = .;
} :data.init2 /* use another section data.init2, see PERCPU_VADDR() above */
.bss : AT(ADDR(.bss) - LOAD_OFFSET) {
. = ALIGN(PAGE_SIZE);
__bss_start = .; /* BSS */
*(.bss.page_aligned)
*(.bss)
__bss_stop = .;
}
.brk : AT(ADDR(.brk) - LOAD_OFFSET) {
. = ALIGN(PAGE_SIZE);
__brk_base = . ;
. += 64 * 1024 ; /* 64k alignment slop space */
*(.brk_reservation) /* areas brk users have reserved */
__brk_limit = . ;
}
_end = . ;
/* Sections to be discarded */
/DISCARD/ : {
*(.exitcall.exit)
*(.eh_frame)
*(.discard)
}
STABS_DEBUG
DWARF_DEBUG
}
/*
* Per-cpu symbols which need to be offset from __per_cpu_load
* for the boot processor.
*/
#define INIT_PER_CPU(x) init_per_cpu__##x = per_cpu__##x + __per_cpu_load
INIT_PER_CPU(gdt_page);
INIT_PER_CPU(irq_stack_union);
/*
* Build-time check on the image size:
*/
ASSERT((_end - _text <= KERNEL_IMAGE_SIZE),
"kernel image bigger than KERNEL_IMAGE_SIZE")
#ifdef CONFIG_SMP
ASSERT((per_cpu__irq_stack_union == 0),
"irq_stack_union is not at start of per-cpu area");
#endif
#ifdef CONFIG_KEXEC
#include <asm/kexec.h>
ASSERT(kexec_control_code_size <= KEXEC_CONTROL_CODE_MAX_SIZE,
"kexec control code size is too big")
#endif
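VSYSCALL_ADDR is spelled -10*1024*1024 above; sign-extended to 64 bits,
that is the familiar fixed vsyscall address.  A one-line check (assuming
a 64-bit shell):

	# -10 MB as a 64-bit two's-complement value:
	printf '0x%016lx\n' $(( -10 * 1024 * 1024 ))	# 0xffffffffff600000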
@@ -729,6 +729,14 @@ static struct ctl_table kern_table[] = {
	.mode = 0444,
	.proc_handler = &proc_dointvec,
	},
{
.ctl_name = CTL_UNNUMBERED,
.procname = "bootloader_version",
.data = &bootloader_version,
.maxlen = sizeof (int),
.mode = 0444,
.proc_handler = &proc_dointvec,
},
	{
	.ctl_name = CTL_UNNUMBERED,
	.procname = "kstack_depth_to_print",
...
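Together with the ext_loader_type/ext_loader_ver fields added to the
setup header by boot protocol 2.10, this makes the extended boot loader
version readable at run time, next to the existing bootloader_type
entry:

	# both entries are read-only (mode 0444); the values depend on
	# what the boot loader wrote into the setup header
	cat /proc/sys/kernel/bootloader_type
	cat /proc/sys/kernel/bootloader_version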
@@ -188,20 +188,34 @@ cmd_objcopy = $(OBJCOPY) $(OBJCOPYFLAGS) $(OBJCOPYFLAGS_$(@F)) $< $@
# ---------------------------------------------------------------------------
quiet_cmd_gzip = GZIP    $@
-cmd_gzip = gzip -f -9 < $< > $@
+cmd_gzip = (cat $(filter-out FORCE,$^) | gzip -f -9 > $@) || \
+	(rm -f $@ ; false)

# Bzip2
# ---------------------------------------------------------------------------

-# Bzip2 does not include size in file... so we have to fake that
-size_append=$(CONFIG_SHELL) $(srctree)/scripts/bin_size
+# Bzip2 and LZMA do not include the size in the file... so we have to
+# fake that; append the size as a 32-bit little-endian number, as gzip does.
+size_append = echo -ne $(shell						\
+dec_size=0;								\
+for F in $1; do								\
+	fsize=$$(stat -c "%s" $$F);					\
+	dec_size=$$(expr $$dec_size + $$fsize);				\
+done;									\
+printf "%08x" $$dec_size |						\
+	sed 's/\(..\)\(..\)\(..\)\(..\)/\\\\x\4\\\\x\3\\\\x\2\\\\x\1/g'	\
+)

quiet_cmd_bzip2 = BZIP2   $@
-cmd_bzip2 = (bzip2 -9 < $< && $(size_append) $<) > $@ || (rm -f $@ ; false)
+cmd_bzip2 = (cat $(filter-out FORCE,$^) | \
+	bzip2 -9 && $(call size_append, $(filter-out FORCE,$^))) > $@ || \
+	(rm -f $@ ; false)

# Lzma
# ---------------------------------------------------------------------------
quiet_cmd_lzma = LZMA    $@
-cmd_lzma = (lzma -9 -c $< && $(size_append) $<) >$@ || (rm -f $@ ; false)
+cmd_lzma = (cat $(filter-out FORCE,$^) | \
+	lzma -9 && $(call size_append, $(filter-out FORCE,$^))) > $@ || \
+	(rm -f $@ ; false)
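Unlike the removed scripts/bin_size helper, size_append takes a list of
files and appends their *total* size, since the compressed payload can
now be assembled from several concatenated inputs.  A rough shell
equivalent of the macro body (the /tmp file names are made up for
illustration; assumes GNU stat and echo):

	# two throw-away inputs: 256 + 44 = 300 bytes = 0x0000012c
	dd if=/dev/zero of=/tmp/a.bin bs=256 count=1 2>/dev/null
	dd if=/dev/zero of=/tmp/b.bin bs=44 count=1 2>/dev/null
	dec_size=0
	for F in /tmp/a.bin /tmp/b.bin; do
		fsize=$(stat -c "%s" $F)
		dec_size=$(expr $dec_size + $fsize)
	done
	le=$(printf "%08x" $dec_size |
		sed 's/\(..\)\(..\)\(..\)\(..\)/\\x\4\\x\3\\x\2\\x\1/g')
	/bin/echo -ne $le | od -An -tx1		# -> 2c 01 00 00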
#!/bin/sh
# Emit the size of $1 on stdout as a 32-bit little-endian binary number.
if [ $# = 0 ] ; then
	echo "Usage: $0 file"
	exit 1
fi
size_dec=`stat -c "%s" $1`
size_hex_echo_string=`printf "%08x" $size_dec |
	sed 's/\(..\)\(..\)\(..\)\(..\)/\\\\x\4\\\\x\3\\\\x\2\\\\x\1/g'`
/bin/echo -ne $size_hex_echo_string
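For comparison, the helper removed here emitted the same four-byte
little-endian trailer for a single file, e.g.:

	dd if=/dev/zero of=/tmp/payload bs=300 count=1 2>/dev/null
	sh scripts/bin_size /tmp/payload | od -An -tx1	# -> 2c 01 00 00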