Commit c226951b authored by Jeff Garzik

Merge branch 'master' into upstream

parents b0df3bd1 e8216dee
@@ -1363,6 +1363,11 @@ running once the system is up.
reserve= [KNL,BUGS] Force the kernel to ignore some iomem area
reservetop= [IA-32]
Format: nn[KMG]
Reserves a hole at the top of the kernel virtual
address space.
resume= [SWSUSP]
Specify the partition device for software suspend
...
DCCP protocol
=============
Last updated: 10 November 2005
Contents
========
@@ -42,8 +41,11 @@ Socket options
DCCP_SOCKOPT_PACKET_SIZE is used for CCID3 to set default packet size for
calculations.
DCCP_SOCKOPT_SERVICE sets the service. This is compulsory as per the
specification. If you don't set it you will get EPROTO.
DCCP_SOCKOPT_SERVICE sets the service. The specification mandates use of
service codes (RFC 4340, sec. 8.1.2); if this socket option is not set,
the socket will fall back to 0 (which means that no meaningful service code
is present). Connecting sockets set at most one service option; for
listening sockets, multiple service codes can be specified.
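
As an aside (not part of this diff), a minimal userspace sketch of setting a service code on a connecting socket is shown below. The constants mirror <linux/dccp.h>; the service value, port and address are arbitrary, and passing the code in network byte order is an assumption rather than something this patch states.

/*
 * Illustrative sketch only: set a DCCP service code before connect().
 */
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

#ifndef SOCK_DCCP
# define SOCK_DCCP		6
#endif
#ifndef IPPROTO_DCCP
# define IPPROTO_DCCP		33
#endif
#ifndef SOL_DCCP
# define SOL_DCCP		269
#endif
#ifndef DCCP_SOCKOPT_SERVICE
# define DCCP_SOCKOPT_SERVICE	2
#endif

int main(void)
{
	int fd = socket(AF_INET, SOCK_DCCP, IPPROTO_DCCP);
	uint32_t service = htonl(42);		/* arbitrary example code */
	struct sockaddr_in sa;

	if (fd < 0) {
		perror("socket");
		return 1;
	}
	/* Must happen before connect(), otherwise the code falls back to 0. */
	if (setsockopt(fd, SOL_DCCP, DCCP_SOCKOPT_SERVICE,
		       &service, sizeof(service)) < 0)
		perror("setsockopt(DCCP_SOCKOPT_SERVICE)");

	memset(&sa, 0, sizeof(sa));
	sa.sin_family = AF_INET;
	sa.sin_port = htons(5001);		/* arbitrary example port */
	sa.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
	if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
		perror("connect");
	return 0;
}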
Notes
=====
...
@@ -52,3 +52,18 @@ suspend image will be as small as possible.
Reading from this file will display the current image size limit, which
is set to 500 MB by default.
/sys/power/pm_trace controls the code which saves the last PM event point in
the RTC across reboots, so that you can debug a machine that just hangs
during suspend (or more commonly, during resume). Namely, the RTC is only
used to save the last PM event point if this file contains '1'. Initially it
contains '0' which may be changed to '1' by writing a string representing a
nonzero integer into it.
To use this debugging feature you should attempt to suspend the machine, then
reboot it and run
dmesg -s 1000000 | grep 'hash matches'
CAUTION: Using it will cause your machine's real-time (CMOS) clock to be
set to a random invalid time after a resume.
@@ -29,6 +29,7 @@ Currently, these files are in /proc/sys/vm:
- drop-caches
- zone_reclaim_mode
- min_unmapped_ratio
- min_slab_ratio
- panic_on_oom
==============================================================
@@ -138,7 +139,6 @@ This is value ORed together of
1 = Zone reclaim on
2 = Zone reclaim writes dirty pages out
4 = Zone reclaim swaps pages
8 = Also do a global slab reclaim pass
zone_reclaim_mode is set during bootup to 1 if it is determined that pages
from remote zones will cause a measurable performance reduction. The
@@ -162,18 +162,13 @@ Allowing regular swap effectively restricts allocations to the local
node unless explicitly overridden by memory policies or cpuset
configurations.
It may be advisable to allow slab reclaim if the system makes heavy
use of files and builds up large slab caches. However, the slab
shrink operation is global, may take a long time and free slabs
in all nodes of the system.
=============================================================
min_unmapped_ratio:
This is available only on NUMA kernels.
A percentage of the file backed pages in each zone. Zone reclaim will only
A percentage of the total pages in each zone. Zone reclaim will only
occur if more than this percentage of pages are file backed and unmapped.
This is to insure that a minimal amount of local pages is still available for
file I/O even if the node is overallocated.
@@ -182,6 +177,24 @@ The default is 1 percent.
=============================================================
min_slab_ratio:
This is available only on NUMA kernels.
A percentage of the total pages in each zone. On Zone reclaim
(fallback from the local zone occurs) slabs will be reclaimed if more
than this percentage of pages in a zone are reclaimable slab pages.
This insures that the slab growth stays under control even in NUMA
systems that rarely perform global reclaim.
The default is 5 percent.
Note that slab reclaim is triggered in a per zone / node fashion.
The process of reclaiming slab memory is currently not node specific
and may not be fast.
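
As a side note (not part of this diff), the sketch below shows how the knob documented above could be inspected and adjusted from userspace; the value 10 is arbitrary, and the same pattern applies to min_unmapped_ratio and zone_reclaim_mode.

/*
 * Illustrative sketch only: read and set /proc/sys/vm/min_slab_ratio.
 */
#include <stdio.h>

int main(void)
{
	const char *path = "/proc/sys/vm/min_slab_ratio";
	FILE *f = fopen(path, "r");
	int ratio = -1;

	if (f) {
		if (fscanf(f, "%d", &ratio) == 1)
			printf("current min_slab_ratio: %d%%\n", ratio);
		fclose(f);
	}

	f = fopen(path, "w");		/* requires root */
	if (!f) {
		perror(path);
		return 1;
	}
	fprintf(f, "%d\n", 10);		/* arbitrary example value */
	fclose(f);
	return 0;
}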
=============================================================
panic_on_oom
This enables or disables panic on out-of-memory feature. If this is set to 1,
...
@@ -443,6 +443,23 @@ W: http://people.redhat.com/sgrubb/audit/
T: git kernel.org:/pub/scm/linux/kernel/git/dwmw2/audit-2.6.git
S: Maintained
AVR32 ARCHITECTURE
P: Atmel AVR32 Support Team
M: avr32@atmel.com
P: Haavard Skinnemoen
M: hskinnemoen@atmel.com
W: http://www.atmel.com/products/AVR32/
W: http://avr32linux.org/
W: http://avrfreaks.net/
S: Supported
AVR32/AT32AP MACHINE SUPPORT
P: Atmel AVR32 Support Team
M: avr32@atmel.com
P: Haavard Skinnemoen
M: hskinnemoen@atmel.com
S: Supported
AX.25 NETWORK LAYER
P: Ralf Baechle
M: ralf@linux-mips.org
@@ -2031,6 +2048,13 @@ L: netfilter@lists.netfilter.org
L: netfilter-devel@lists.netfilter.org
S: Supported
NETLABEL
P: Paul Moore
M: paul.moore@hp.com
W: http://netlabel.sf.net
L: netdev@vger.kernel.org
S: Supported
NETROM NETWORK LAYER
P: Ralf Baechle
M: ralf@linux-mips.org
...
@@ -381,7 +381,7 @@ config ALPHA_EV56
config ALPHA_EV56
prompt "EV56 CPU (speed >= 333MHz)?"
depends on ALPHA_NORITAKE && ALPHA_PRIMO
depends on ALPHA_NORITAKE || ALPHA_PRIMO
config ALPHA_EV56
prompt "EV56 CPU (speed >= 400MHz)?"
...
@@ -270,7 +270,7 @@ callback_init(void * kernel_end)
void
paging_init(void)
{
unsigned long zones_size[MAX_NR_ZONES] = {0, 0, 0};
unsigned long zones_size[MAX_NR_ZONES] = {0, };
unsigned long dma_pfn, high_pfn;
dma_pfn = virt_to_phys((char *)MAX_DMA_ADDRESS) >> PAGE_SHIFT;
...
@@ -177,7 +177,7 @@ static void unmap_area_sections(unsigned long virt, unsigned long size)
* Free the page table, if there was one.
*/
if ((pmd_val(pmd) & PMD_TYPE_MASK) == PMD_TYPE_TABLE)
pte_free_kernel(pmd_page_kernel(pmd));
pte_free_kernel(pmd_page_vaddr(pmd));
}
addr += PGDIR_SIZE;
...
#
# For a description of the syntax of this configuration file,
# see Documentation/kbuild/kconfig-language.txt.
#
mainmenu "Linux Kernel Configuration"
config AVR32
bool
default y
# With EMBEDDED=n, we get lots of stuff automatically selected
# that we usually don't need on AVR32.
select EMBEDDED
help
AVR32 is a high-performance 32-bit RISC microprocessor core,
designed for cost-sensitive embedded applications, with particular
emphasis on low power consumption and high code density.
There is an AVR32 Linux project with a web page at
http://avr32linux.org/.
config UID16
bool
config GENERIC_HARDIRQS
bool
default y
config HARDIRQS_SW_RESEND
bool
default y
config GENERIC_IRQ_PROBE
bool
default y
config RWSEM_GENERIC_SPINLOCK
bool
default y
config GENERIC_TIME
bool
default y
config RWSEM_XCHGADD_ALGORITHM
bool
config GENERIC_BUST_SPINLOCK
bool
config GENERIC_HWEIGHT
bool
default y
config GENERIC_CALIBRATE_DELAY
bool
default y
source "init/Kconfig"
menu "System Type and features"
config SUBARCH_AVR32B
bool
config MMU
bool
config PERFORMANCE_COUNTERS
bool
config PLATFORM_AT32AP
bool
select SUBARCH_AVR32B
select MMU
select PERFORMANCE_COUNTERS
choice
prompt "AVR32 CPU type"
default CPU_AT32AP7000
config CPU_AT32AP7000
bool "AT32AP7000"
select PLATFORM_AT32AP
endchoice
#
# CPU Daughterboards for ATSTK1000
config BOARD_ATSTK1002
bool
choice
prompt "AVR32 board type"
default BOARD_ATSTK1000
config BOARD_ATSTK1000
bool "ATSTK1000 evaluation board"
select BOARD_ATSTK1002 if CPU_AT32AP7000
endchoice
choice
prompt "Boot loader type"
default LOADER_U_BOOT
config LOADER_U_BOOT
bool "U-Boot (or similar) bootloader"
endchoice
config LOAD_ADDRESS
hex
default 0x10000000 if LOADER_U_BOOT=y && CPU_AT32AP7000=y
config ENTRY_ADDRESS
hex
default 0x90000000 if LOADER_U_BOOT=y && CPU_AT32AP7000=y
config PHYS_OFFSET
hex
default 0x10000000 if CPU_AT32AP7000=y
source "kernel/Kconfig.preempt"
config HAVE_ARCH_BOOTMEM_NODE
bool
default n
config ARCH_HAVE_MEMORY_PRESENT
bool
default n
config NEED_NODE_MEMMAP_SIZE
bool
default n
config ARCH_FLATMEM_ENABLE
bool
default y
config ARCH_DISCONTIGMEM_ENABLE
bool
default n
config ARCH_SPARSEMEM_ENABLE
bool
default n
source "mm/Kconfig"
config OWNERSHIP_TRACE
bool "Ownership trace support"
default y
help
Say Y to generate an Ownership Trace message on every context switch,
enabling Nexus-compliant debuggers to keep track of the PID of the
currently executing task.
# FPU emulation goes here
source "kernel/Kconfig.hz"
config CMDLINE
string "Default kernel command line"
default ""
help
If you don't have a boot loader capable of passing a command line string
to the kernel, you may specify one here. As a minimum, you should specify
the memory size and the root device (e.g., mem=8M, root=/dev/nfs).
endmenu
menu "Bus options"
config PCI
bool
source "drivers/pci/Kconfig"
source "drivers/pcmcia/Kconfig"
endmenu
menu "Executable file formats"
source "fs/Kconfig.binfmt"
endmenu
source "net/Kconfig"
source "drivers/Kconfig"
source "fs/Kconfig"
source "arch/avr32/Kconfig.debug"
source "security/Kconfig"
source "crypto/Kconfig"
source "lib/Kconfig"
menu "Kernel hacking"
config TRACE_IRQFLAGS_SUPPORT
bool
default y
source "lib/Kconfig.debug"
config KPROBES
bool "Kprobes"
depends on DEBUG_KERNEL
help
Kprobes allows you to trap at almost any kernel address and
execute a callback function. register_kprobe() establishes
a probepoint and specifies the callback. Kprobes is useful
for kernel debugging, non-intrusive instrumentation and testing.
If in doubt, say "N".
endmenu
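
The KPROBES help text above mentions register_kprobe(); for orientation only (not part of this diff), a minimal module sketch using the generic kprobes API follows. The .symbol_name convenience field and the probed symbol are assumptions and may need adjusting for a given tree.

/*
 * Illustrative sketch only: registering a kprobe with a pre-handler.
 */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/kprobes.h>

static int example_pre(struct kprobe *p, struct pt_regs *regs)
{
	printk(KERN_INFO "kprobe hit at %p\n", p->addr);
	return 0;	/* let the probed instruction execute normally */
}

static struct kprobe example_kp = {
	.symbol_name = "do_fork",	/* hypothetical probe target */
	.pre_handler = example_pre,
};

static int __init example_init(void)
{
	return register_kprobe(&example_kp);
}

static void __exit example_exit(void)
{
	unregister_kprobe(&example_kp);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");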
#
# This file is subject to the terms and conditions of the GNU General Public
# License. See the file "COPYING" in the main directory of this archive
# for more details.
#
# Copyright (C) 2004-2006 Atmel Corporation.
# Default target when executing plain make
.PHONY: all
all: uImage vmlinux.elf linux.lst
KBUILD_DEFCONFIG := atstk1002_defconfig
CFLAGS += -pipe -fno-builtin -mno-pic
AFLAGS += -mrelax -mno-pic
CFLAGS_MODULE += -mno-relax
LDFLAGS_vmlinux += --relax
cpuflags-$(CONFIG_CPU_AP7000) += -mcpu=ap7000
CFLAGS += $(cpuflags-y)
AFLAGS += $(cpuflags-y)
CHECKFLAGS += -D__avr32__
LIBGCC := $(shell $(CC) $(CFLAGS) -print-libgcc-file-name)
head-$(CONFIG_LOADER_U_BOOT) += arch/avr32/boot/u-boot/head.o
head-y += arch/avr32/kernel/head.o
core-$(CONFIG_PLATFORM_AT32AP) += arch/avr32/mach-at32ap/
core-$(CONFIG_BOARD_ATSTK1000) += arch/avr32/boards/atstk1000/
core-$(CONFIG_LOADER_U_BOOT) += arch/avr32/boot/u-boot/
core-y += arch/avr32/kernel/
core-y += arch/avr32/mm/
libs-y += arch/avr32/lib/ #$(LIBGCC)
archincdir-$(CONFIG_PLATFORM_AT32AP) := arch-at32ap
include/asm-avr32/.arch: $(wildcard include/config/platform/*.h) include/config/auto.conf
@echo ' SYMLINK include/asm-avr32/arch -> include/asm-avr32/$(archincdir-y)'
ifneq ($(KBUILD_SRC),)
$(Q)mkdir -p include/asm-avr32
$(Q)ln -fsn $(srctree)/include/asm-avr32/$(archincdir-y) include/asm-avr32/arch
else
$(Q)ln -fsn $(archincdir-y) include/asm-avr32/arch
endif
@touch $@
archprepare: include/asm-avr32/.arch
BOOT_TARGETS := vmlinux.elf vmlinux.bin uImage uImage.srec
.PHONY: $(BOOT_TARGETS) install
boot := arch/$(ARCH)/boot/images
KBUILD_IMAGE := $(boot)/uImage
vmlinux.elf: KBUILD_IMAGE := $(boot)/vmlinux.elf
vmlinux.cso: KBUILD_IMAGE := $(boot)/vmlinux.cso
uImage.srec: KBUILD_IMAGE := $(boot)/uImage.srec
uImage: KBUILD_IMAGE := $(boot)/uImage
quiet_cmd_listing = LST $@
cmd_listing = avr32-linux-objdump $(OBJDUMPFLAGS) -lS $< > $@
quiet_cmd_disasm = DIS $@
cmd_disasm = avr32-linux-objdump $(OBJDUMPFLAGS) -d $< > $@
vmlinux.elf vmlinux.bin uImage.srec uImage vmlinux.cso: vmlinux
$(Q)$(MAKE) $(build)=$(boot) $(boot)/$@
install: vmlinux
$(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(KBUILD_IMAGE) $@
linux.s: vmlinux
$(call if_changed,disasm)
linux.lst: vmlinux
$(call if_changed,listing)
define archhelp
@echo '* vmlinux.elf - ELF image with load address 0'
@echo ' vmlinux.cso - PathFinder CSO image'
@echo ' uImage - Create a bootable image for U-Boot'
endef
obj-y += setup.o spi.o flash.o
obj-$(CONFIG_BOARD_ATSTK1002) += atstk1002.o
/*
* ATSTK1002 daughterboard-specific init code
*
* Copyright (C) 2005-2006 Atmel Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/init.h>
#include <asm/arch/board.h>
struct eth_platform_data __initdata eth0_data = {
.valid = 1,
.mii_phy_addr = 0x10,
.is_rmii = 0,
.hw_addr = { 0x6a, 0x87, 0x71, 0x14, 0xcd, 0xcb },
};
extern struct lcdc_platform_data atstk1000_fb0_data;
static int __init atstk1002_init(void)
{
at32_add_system_devices();
at32_add_device_usart(1); /* /dev/ttyS0 */
at32_add_device_usart(2); /* /dev/ttyS1 */
at32_add_device_usart(3); /* /dev/ttyS2 */
at32_add_device_eth(0, &eth0_data);
at32_add_device_spi(0);
at32_add_device_lcdc(0, &atstk1000_fb0_data);
return 0;
}
postcore_initcall(atstk1002_init);
/*
* ATSTK1000 board-specific flash initialization
*
* Copyright (C) 2005-2006 Atmel Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/init.h>
#include <linux/platform_device.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/partitions.h>
#include <linux/mtd/physmap.h>
#include <asm/arch/smc.h>
static struct smc_config flash_config __initdata = {
.ncs_read_setup = 0,
.nrd_setup = 40,
.ncs_write_setup = 0,
.nwe_setup = 10,
.ncs_read_pulse = 80,
.nrd_pulse = 40,
.ncs_write_pulse = 65,
.nwe_pulse = 55,
.read_cycle = 120,
.write_cycle = 120,
.bus_width = 2,
.nrd_controlled = 1,
.nwe_controlled = 1,
.byte_write = 1,
};
static struct mtd_partition flash_parts[] = {
{
.name = "u-boot",
.offset = 0x00000000,
.size = 0x00020000, /* 128 KiB */
.mask_flags = MTD_WRITEABLE,
},
{
.name = "root",
.offset = 0x00020000,
.size = 0x007d0000,
},
{
.name = "env",
.offset = 0x007f0000,
.size = 0x00010000,
.mask_flags = MTD_WRITEABLE,
},
};
static struct physmap_flash_data flash_data = {
.width = 2,
.nr_parts = ARRAY_SIZE(flash_parts),
.parts = flash_parts,
};
static struct resource flash_resource = {
.start = 0x00000000,
.end = 0x007fffff,
.flags = IORESOURCE_MEM,
};
static struct platform_device flash_device = {
.name = "physmap-flash",
.id = 0,
.resource = &flash_resource,
.num_resources = 1,
.dev = {
.platform_data = &flash_data,
},
};
/* This needs to be called after the SMC has been initialized */
static int __init atstk1000_flash_init(void)
{
int ret;
ret = smc_set_configuration(0, &flash_config);
if (ret < 0) {
printk(KERN_ERR "atstk1000: failed to set NOR flash timing\n");
return ret;
}
platform_device_register(&flash_device);
return 0;
}
device_initcall(atstk1000_flash_init);
/*
* ATSTK1000 board-specific setup code.
*
* Copyright (C) 2005-2006 Atmel Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/bootmem.h>
#include <linux/init.h>
#include <linux/types.h>
#include <linux/linkage.h>
#include <asm/setup.h>
#include <asm/arch/board.h>
/* Initialized by bootloader-specific startup code. */
struct tag *bootloader_tags __initdata;
struct lcdc_platform_data __initdata atstk1000_fb0_data;
asmlinkage void __init board_early_init(void)
{
extern void sdram_init(void);
#ifdef CONFIG_LOADER_STANDALONE
sdram_init();
#endif
}
void __init board_setup_fbmem(unsigned long fbmem_start,
unsigned long fbmem_size)
{
if (!fbmem_size)
return;
if (!fbmem_start) {
void *fbmem;
fbmem = alloc_bootmem_low_pages(fbmem_size);
fbmem_start = __pa(fbmem);
} else {
pg_data_t *pgdat;
for_each_online_pgdat(pgdat) {
if (fbmem_start >= pgdat->bdata->node_boot_start
&& fbmem_start <= pgdat->bdata->node_low_pfn)
reserve_bootmem_node(pgdat, fbmem_start,
fbmem_size);
}
}
printk("%luKiB framebuffer memory at address 0x%08lx\n",
fbmem_size >> 10, fbmem_start);
atstk1000_fb0_data.fbmem_start = fbmem_start;
atstk1000_fb0_data.fbmem_size = fbmem_size;
}
/*
* ATSTK1000 SPI devices
*
* Copyright (C) 2005 Atmel Norway
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/device.h>
#include <linux/spi/spi.h>
static struct spi_board_info spi_board_info[] __initdata = {
{
.modalias = "ltv350qv",
.max_speed_hz = 16000000,
.bus_num = 0,
.chip_select = 1,
},
};
static int board_init_spi(void)
{
spi_register_board_info(spi_board_info, ARRAY_SIZE(spi_board_info));
return 0;
}
arch_initcall(board_init_spi);
#
# Copyright (C) 2004-2006 Atmel Corporation
#
# This file is subject to the terms and conditions of the GNU General Public
# License. See the file "COPYING" in the main directory of this archive
# for more details.
#
MKIMAGE := $(srctree)/scripts/mkuboot.sh
extra-y := vmlinux.bin vmlinux.gz
OBJCOPYFLAGS_vmlinux.bin := -O binary
$(obj)/vmlinux.bin: vmlinux FORCE
$(call if_changed,objcopy)
$(obj)/vmlinux.gz: $(obj)/vmlinux.bin FORCE
$(call if_changed,gzip)
quiet_cmd_uimage = UIMAGE $@
cmd_uimage = $(CONFIG_SHELL) $(MKIMAGE) -A avr32 -O linux -T kernel \
-C gzip -a $(CONFIG_LOAD_ADDRESS) -e $(CONFIG_ENTRY_ADDRESS) \
-n 'Linux-$(KERNELRELEASE)' -d $< $@
targets += uImage uImage.srec
$(obj)/uImage: $(obj)/vmlinux.gz
$(call if_changed,uimage)
@echo ' Image $@ is ready'
OBJCOPYFLAGS_uImage.srec := -I binary -O srec
$(obj)/uImage.srec: $(obj)/uImage
$(call if_changed,objcopy)
OBJCOPYFLAGS_vmlinux.elf := --change-section-lma .text-0x80000000 \
--change-section-lma __ex_table-0x80000000 \
--change-section-lma .rodata-0x80000000 \
--change-section-lma .data-0x80000000 \
--change-section-lma .init-0x80000000 \
--change-section-lma .bss-0x80000000 \
--change-section-lma .initrd-0x80000000 \
--change-section-lma __param-0x80000000 \
--change-section-lma __ksymtab-0x80000000 \
--change-section-lma __ksymtab_gpl-0x80000000 \
--change-section-lma __kcrctab-0x80000000 \
--change-section-lma __kcrctab_gpl-0x80000000 \
--change-section-lma __ksymtab_strings-0x80000000 \
--change-section-lma .got-0x80000000 \
--set-start 0xa0000000
$(obj)/vmlinux.elf: vmlinux FORCE
$(call if_changed,objcopy)
quiet_cmd_sfdwarf = SFDWARF $@
cmd_sfdwarf = sfdwarf $< TO $@ GNUAVR IW $(SFDWARF_FLAGS) > $(obj)/sfdwarf.log
$(obj)/vmlinux.cso: $(obj)/vmlinux.elf FORCE
$(call if_changed,sfdwarf)
install: $(BOOTIMAGE)
sh $(srctree)/install-kernel.sh $<
# Generated files to be removed upon make clean
clean-files := vmlinux* uImage uImage.srec
extra-y := head.o
obj-y := empty.o
/*
* Startup code for use with the u-boot bootloader.
*
* Copyright (C) 2004-2006 Atmel Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <asm/setup.h>
/*
* The kernel is loaded where we want it to be and all caches
* have just been flushed. We get two parameters from u-boot:
*
* r12 contains a magic number (ATAG_MAGIC)
* r11 points to a tag table providing information about
* the system.
*/
.section .init.text,"ax"
.global _start
_start:
/* Check if the boot loader actually provided a tag table */
lddpc r0, magic_number
cp.w r12, r0
brne no_tag_table
/* Initialize .bss */
lddpc r2, bss_start_addr
lddpc r3, end_addr
mov r0, 0
mov r1, 0
1: st.d r2++, r0
cp r2, r3
brlo 1b
/*
* Save the tag table address for later use. This must be done
* _after_ .bss has been initialized...
*/
lddpc r0, tag_table_addr
st.w r0[0], r11
/* Jump to loader-independent setup code */
rjmp kernel_entry
.align 2
magic_number:
.long ATAG_MAGIC
tag_table_addr:
.long bootloader_tags
bss_start_addr:
.long __bss_start
end_addr:
.long _end
no_tag_table:
sub r12, pc, (. - 2f)
bral panic
2: .asciz "Boot loader didn't provide correct magic number\n"
...
#
# Makefile for the Linux/AVR32 kernel.
#
extra-y := head.o vmlinux.lds
obj-$(CONFIG_SUBARCH_AVR32B) += entry-avr32b.o
obj-y += syscall_table.o syscall-stubs.o irq.o
obj-y += setup.o traps.o semaphore.o ptrace.o
obj-y += signal.o sys_avr32.o process.o time.o
obj-y += init_task.o switch_to.o cpu.o
obj-$(CONFIG_MODULES) += module.o avr32_ksyms.o
obj-$(CONFIG_KPROBES) += kprobes.o
USE_STANDARD_AS_RULE := true
%.lds: %.lds.c FORCE
$(call if_changed_dep,cpp_lds_S)
/*
* Generate definitions needed by assembly language modules.
* This code generates raw asm output which is post-processed
* to extract and format the required data.
*/
#include <linux/thread_info.h>
#define DEFINE(sym, val) \
asm volatile("\n->" #sym " %0 " #val : : "i" (val))
#define BLANK() asm volatile("\n->" : : )
#define OFFSET(sym, str, mem) \
DEFINE(sym, offsetof(struct str, mem));
void foo(void)
{
OFFSET(TI_task, thread_info, task);
OFFSET(TI_exec_domain, thread_info, exec_domain);
OFFSET(TI_flags, thread_info, flags);
OFFSET(TI_cpu, thread_info, cpu);
OFFSET(TI_preempt_count, thread_info, preempt_count);
OFFSET(TI_restart_block, thread_info, restart_block);
}
/*
* Export AVR32-specific functions for loadable modules.
*
* Copyright (C) 2004-2006 Atmel Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/module.h>
#include <asm/checksum.h>
#include <asm/uaccess.h>
#include <asm/delay.h>
/*
* GCC functions
*/
extern unsigned long long __avr32_lsl64(unsigned long long u, unsigned long b);
extern unsigned long long __avr32_lsr64(unsigned long long u, unsigned long b);
extern unsigned long long __avr32_asr64(unsigned long long u, unsigned long b);
EXPORT_SYMBOL(__avr32_lsl64);
EXPORT_SYMBOL(__avr32_lsr64);
EXPORT_SYMBOL(__avr32_asr64);
/*
* String functions
*/
EXPORT_SYMBOL(memset);
EXPORT_SYMBOL(memcpy);
/*
* Userspace access stuff.
*/
EXPORT_SYMBOL(copy_from_user);
EXPORT_SYMBOL(copy_to_user);
EXPORT_SYMBOL(__copy_user);
EXPORT_SYMBOL(strncpy_from_user);
EXPORT_SYMBOL(__strncpy_from_user);
EXPORT_SYMBOL(clear_user);
EXPORT_SYMBOL(__clear_user);
EXPORT_SYMBOL(csum_partial);
EXPORT_SYMBOL(csum_partial_copy_generic);
/* Delay loops (lib/delay.S) */
EXPORT_SYMBOL(__ndelay);
EXPORT_SYMBOL(__udelay);
EXPORT_SYMBOL(__const_udelay);
/* Bit operations (lib/findbit.S) */
EXPORT_SYMBOL(find_first_zero_bit);
EXPORT_SYMBOL(find_next_zero_bit);
EXPORT_SYMBOL(find_first_bit);
EXPORT_SYMBOL(find_next_bit);
EXPORT_SYMBOL(generic_find_next_zero_le_bit);
/*
* Copyright (C) 2005-2006 Atmel Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/init.h>
#include <linux/sysdev.h>
#include <linux/seq_file.h>
#include <linux/cpu.h>
#include <linux/percpu.h>
#include <linux/param.h>
#include <linux/errno.h>
#include <asm/setup.h>
#include <asm/sysreg.h>
static DEFINE_PER_CPU(struct cpu, cpu_devices);
#ifdef CONFIG_PERFORMANCE_COUNTERS
/*
* XXX: If/when a SMP-capable implementation of AVR32 will ever be
* made, we must make sure that the code executes on the correct CPU.
*/
static ssize_t show_pc0event(struct sys_device *dev, char *buf)
{
unsigned long pccr;
pccr = sysreg_read(PCCR);
return sprintf(buf, "0x%lx\n", (pccr >> 12) & 0x3f);
}
static ssize_t store_pc0event(struct sys_device *dev, const char *buf,
size_t count)
{
unsigned long val;
char *endp;
val = simple_strtoul(buf, &endp, 0);
if (endp == buf || val > 0x3f)
return -EINVAL;
val = (val << 12) | (sysreg_read(PCCR) & 0xfffc0fff);
sysreg_write(PCCR, val);
return count;
}
static ssize_t show_pc0count(struct sys_device *dev, char *buf)
{
unsigned long pcnt0;
pcnt0 = sysreg_read(PCNT0);
return sprintf(buf, "%lu\n", pcnt0);
}
static ssize_t store_pc0count(struct sys_device *dev, const char *buf,
size_t count)
{
unsigned long val;
char *endp;
val = simple_strtoul(buf, &endp, 0);
if (endp == buf)
return -EINVAL;
sysreg_write(PCNT0, val);
return count;
}
static ssize_t show_pc1event(struct sys_device *dev, char *buf)
{
unsigned long pccr;
pccr = sysreg_read(PCCR);
return sprintf(buf, "0x%lx\n", (pccr >> 18) & 0x3f);
}
static ssize_t store_pc1event(struct sys_device *dev, const char *buf,
size_t count)
{
unsigned long val;
char *endp;
val = simple_strtoul(buf, &endp, 0);
if (endp == buf || val > 0x3f)
return -EINVAL;
val = (val << 18) | (sysreg_read(PCCR) & 0xff03ffff);
sysreg_write(PCCR, val);
return count;
}
static ssize_t show_pc1count(struct sys_device *dev, char *buf)
{
unsigned long pcnt1;
pcnt1 = sysreg_read(PCNT1);
return sprintf(buf, "%lu\n", pcnt1);
}
static ssize_t store_pc1count(struct sys_device *dev, const char *buf,
size_t count)
{
unsigned long val;
char *endp;
val = simple_strtoul(buf, &endp, 0);
if (endp == buf)
return -EINVAL;
sysreg_write(PCNT1, val);
return count;
}
static ssize_t show_pccycles(struct sys_device *dev, char *buf)
{
unsigned long pccnt;
pccnt = sysreg_read(PCCNT);
return sprintf(buf, "%lu\n", pccnt);
}
static ssize_t store_pccycles(struct sys_device *dev, const char *buf,
size_t count)
{
unsigned long val;
char *endp;
val = simple_strtoul(buf, &endp, 0);
if (endp == buf)
return -EINVAL;
sysreg_write(PCCNT, val);
return count;
}
static ssize_t show_pcenable(struct sys_device *dev, char *buf)
{
unsigned long pccr;
pccr = sysreg_read(PCCR);
return sprintf(buf, "%c\n", (pccr & 1)?'1':'0');
}
static ssize_t store_pcenable(struct sys_device *dev, const char *buf,
size_t count)
{
unsigned long pccr, val;
char *endp;
val = simple_strtoul(buf, &endp, 0);
if (endp == buf)
return -EINVAL;
if (val)
val = 1;
pccr = sysreg_read(PCCR);
pccr = (pccr & ~1UL) | val;
sysreg_write(PCCR, pccr);
return count;
}
static SYSDEV_ATTR(pc0event, 0600, show_pc0event, store_pc0event);
static SYSDEV_ATTR(pc0count, 0600, show_pc0count, store_pc0count);
static SYSDEV_ATTR(pc1event, 0600, show_pc1event, store_pc1event);
static SYSDEV_ATTR(pc1count, 0600, show_pc1count, store_pc1count);
static SYSDEV_ATTR(pccycles, 0600, show_pccycles, store_pccycles);
static SYSDEV_ATTR(pcenable, 0600, show_pcenable, store_pcenable);
#endif /* CONFIG_PERFORMANCE_COUNTERS */
static int __init topology_init(void)
{
int cpu;
for_each_possible_cpu(cpu) {
struct cpu *c = &per_cpu(cpu_devices, cpu);
register_cpu(c, cpu);
#ifdef CONFIG_PERFORMANCE_COUNTERS
sysdev_create_file(&c->sysdev, &attr_pc0event);
sysdev_create_file(&c->sysdev, &attr_pc0count);
sysdev_create_file(&c->sysdev, &attr_pc1event);
sysdev_create_file(&c->sysdev, &attr_pc1count);
sysdev_create_file(&c->sysdev, &attr_pccycles);
sysdev_create_file(&c->sysdev, &attr_pcenable);
#endif
}
return 0;
}
subsys_initcall(topology_init);
static const char *cpu_names[] = {
"Morgan",
"AP7000",
};
#define NR_CPU_NAMES ARRAY_SIZE(cpu_names)
static const char *arch_names[] = {
"AVR32A",
"AVR32B",
};
#define NR_ARCH_NAMES ARRAY_SIZE(arch_names)
static const char *mmu_types[] = {
"No MMU",
"ITLB and DTLB",
"Shared TLB",
"MPU"
};
void __init setup_processor(void)
{
unsigned long config0, config1;
unsigned cpu_id, cpu_rev, arch_id, arch_rev, mmu_type;
unsigned tmp;
config0 = sysreg_read(CONFIG0); /* 0x0000013e; */
config1 = sysreg_read(CONFIG1); /* 0x01f689a2; */
cpu_id = config0 >> 24;
cpu_rev = (config0 >> 16) & 0xff;
arch_id = (config0 >> 13) & 0x07;
arch_rev = (config0 >> 10) & 0x07;
mmu_type = (config0 >> 7) & 0x03;
boot_cpu_data.arch_type = arch_id;
boot_cpu_data.cpu_type = cpu_id;
boot_cpu_data.arch_revision = arch_rev;
boot_cpu_data.cpu_revision = cpu_rev;
boot_cpu_data.tlb_config = mmu_type;
tmp = (config1 >> 13) & 0x07;
if (tmp) {
boot_cpu_data.icache.ways = 1 << ((config1 >> 10) & 0x07);
boot_cpu_data.icache.sets = 1 << ((config1 >> 16) & 0x0f);
boot_cpu_data.icache.linesz = 1 << (tmp + 1);
}
tmp = (config1 >> 3) & 0x07;
if (tmp) {
boot_cpu_data.dcache.ways = 1 << (config1 & 0x07);
boot_cpu_data.dcache.sets = 1 << ((config1 >> 6) & 0x0f);
boot_cpu_data.dcache.linesz = 1 << (tmp + 1);
}
if ((cpu_id >= NR_CPU_NAMES) || (arch_id >= NR_ARCH_NAMES)) {
printk ("Unknown CPU configuration (ID %02x, arch %02x), "
"continuing anyway...\n",
cpu_id, arch_id);
return;
}
printk ("CPU: %s [%02x] revision %d (%s revision %d)\n",
cpu_names[cpu_id], cpu_id, cpu_rev,
arch_names[arch_id], arch_rev);
printk ("CPU: MMU configuration: %s\n", mmu_types[mmu_type]);
printk ("CPU: features:");
if (config0 & (1 << 6))
printk(" fpu");
if (config0 & (1 << 5))
printk(" java");
if (config0 & (1 << 4))
printk(" perfctr");
if (config0 & (1 << 3))
printk(" ocd");
printk("\n");
}
#ifdef CONFIG_PROC_FS
static int c_show(struct seq_file *m, void *v)
{
unsigned int icache_size, dcache_size;
unsigned int cpu = smp_processor_id();
icache_size = boot_cpu_data.icache.ways *
boot_cpu_data.icache.sets *
boot_cpu_data.icache.linesz;
dcache_size = boot_cpu_data.dcache.ways *
boot_cpu_data.dcache.sets *
boot_cpu_data.dcache.linesz;
seq_printf(m, "processor\t: %d\n", cpu);
if (boot_cpu_data.arch_type < NR_ARCH_NAMES)
seq_printf(m, "cpu family\t: %s revision %d\n",
arch_names[boot_cpu_data.arch_type],
boot_cpu_data.arch_revision);
if (boot_cpu_data.cpu_type < NR_CPU_NAMES)
seq_printf(m, "cpu type\t: %s revision %d\n",
cpu_names[boot_cpu_data.cpu_type],
boot_cpu_data.cpu_revision);
seq_printf(m, "i-cache\t\t: %dK (%u ways x %u sets x %u)\n",
icache_size >> 10,
boot_cpu_data.icache.ways,
boot_cpu_data.icache.sets,
boot_cpu_data.icache.linesz);
seq_printf(m, "d-cache\t\t: %dK (%u ways x %u sets x %u)\n",
dcache_size >> 10,
boot_cpu_data.dcache.ways,
boot_cpu_data.dcache.sets,
boot_cpu_data.dcache.linesz);
seq_printf(m, "bogomips\t: %lu.%02lu\n",
boot_cpu_data.loops_per_jiffy / (500000/HZ),
(boot_cpu_data.loops_per_jiffy / (5000/HZ)) % 100);
return 0;
}
static void *c_start(struct seq_file *m, loff_t *pos)
{
return *pos < 1 ? (void *)1 : NULL;
}
static void *c_next(struct seq_file *m, void *v, loff_t *pos)
{
++*pos;
return NULL;
}
static void c_stop(struct seq_file *m, void *v)
{
}
struct seq_operations cpuinfo_op = {
.start = c_start,
.next = c_next,
.stop = c_stop,
.show = c_show
};
#endif /* CONFIG_PROC_FS */
...
/*
* Non-board-specific low-level startup code
*
* Copyright (C) 2004-2006 Atmel Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/linkage.h>
#include <asm/page.h>
#include <asm/thread_info.h>
#include <asm/sysreg.h>
.section .init.text,"ax"
.global kernel_entry
kernel_entry:
/* Initialize status register */
lddpc r0, init_sr
mtsr SYSREG_SR, r0
/* Set initial stack pointer */
lddpc sp, stack_addr
sub sp, -THREAD_SIZE
#ifdef CONFIG_FRAME_POINTER
/* Mark last stack frame */
mov lr, 0
mov r7, 0
#endif
/* Set up the PIO, SDRAM controller, early printk, etc. */
rcall board_early_init
/* Start the show */
lddpc pc, kernel_start_addr
.align 2
init_sr:
.long 0x007f0000 /* Supervisor mode, everything masked */
stack_addr:
.long init_thread_union
kernel_start_addr:
.long start_kernel
/*
* Copyright (C) 2004-2006 Atmel Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/sched.h>
#include <linux/init_task.h>
#include <linux/mqueue.h>
#include <asm/pgtable.h>
static struct fs_struct init_fs = INIT_FS;
static struct files_struct init_files = INIT_FILES;
static struct signal_struct init_signals = INIT_SIGNALS(init_signals);
static struct sighand_struct init_sighand = INIT_SIGHAND(init_sighand);
struct mm_struct init_mm = INIT_MM(init_mm);
EXPORT_SYMBOL(init_mm);
/*
* Initial thread structure. Must be aligned on an 8192-byte boundary.
*/
union thread_union init_thread_union
__attribute__((__section__(".data.init_task"))) =
{ INIT_THREAD_INFO(init_task) };
/*
* Initial task structure.
*
* All other task structs will be allocated on slabs in fork.c
*/
struct task_struct init_task = INIT_TASK(init_task);
EXPORT_SYMBOL(init_task);
/*
* Copyright (C) 2004-2006 Atmel Corporation
*
* Based on arch/i386/kernel/irq.c
* Copyright (C) 1992, 1998 Linus Torvalds, Ingo Molnar
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This file contains the code used by various IRQ handling routines:
* asking for different IRQ's should be done through these routines
* instead of just grabbing them. Thus setups with different IRQ numbers
* shouldn't result in any weird surprises, and installing new handlers
* should be easier.
*
* IRQ's are in fact implemented a bit like signal handlers for the kernel.
* Naturally it's not a 1:1 relation, but there are similarities.
*/
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/kernel_stat.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
#include <linux/sysdev.h>
/*
* 'what should we do if we get a hw irq event on an illegal vector'.
* each architecture has to answer this themselves.
*/
void ack_bad_irq(unsigned int irq)
{
printk("unexpected IRQ %u\n", irq);
}
#ifdef CONFIG_PROC_FS
int show_interrupts(struct seq_file *p, void *v)
{
int i = *(loff_t *)v, cpu;
struct irqaction *action;
unsigned long flags;
if (i == 0) {
seq_puts(p, " ");
for_each_online_cpu(cpu)
seq_printf(p, "CPU%d ", cpu);
seq_putc(p, '\n');
}
if (i < NR_IRQS) {
spin_lock_irqsave(&irq_desc[i].lock, flags);
action = irq_desc[i].action;
if (!action)
goto unlock;
seq_printf(p, "%3d: ", i);
for_each_online_cpu(cpu)
seq_printf(p, "%10u ", kstat_cpu(cpu).irqs[i]);
seq_printf(p, " %s", action->name);
for (action = action->next; action; action = action->next)
seq_printf(p, ", %s", action->name);
seq_putc(p, '\n');
unlock:
spin_unlock_irqrestore(&irq_desc[i].lock, flags);
}
return 0;
}
#endif
/*
* Kernel Probes (KProbes)
*
* Copyright (C) 2005-2006 Atmel Corporation
*
* Based on arch/ppc64/kernel/kprobes.c
* Copyright (C) IBM Corporation, 2002, 2004
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/kprobes.h>
#include <linux/ptrace.h>
#include <asm/cacheflush.h>
#include <asm/kdebug.h>
#include <asm/ocd.h>
DEFINE_PER_CPU(struct kprobe *, current_kprobe);
static unsigned long kprobe_status;
static struct pt_regs jprobe_saved_regs;
int __kprobes arch_prepare_kprobe(struct kprobe *p)
{
int ret = 0;
if ((unsigned long)p->addr & 0x01) {
printk("Attempt to register kprobe at an unaligned address\n");
ret = -EINVAL;
}
/* XXX: Might be a good idea to check if p->addr is a valid
* kernel address as well... */
if (!ret) {
pr_debug("copy kprobe at %p\n", p->addr);
memcpy(p->ainsn.insn, p->addr, MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
p->opcode = *p->addr;
}
return ret;
}
void __kprobes arch_arm_kprobe(struct kprobe *p)
{
pr_debug("arming kprobe at %p\n", p->addr);
*p->addr = BREAKPOINT_INSTRUCTION;
flush_icache_range((unsigned long)p->addr,
(unsigned long)p->addr + sizeof(kprobe_opcode_t));
}
void __kprobes arch_disarm_kprobe(struct kprobe *p)
{
pr_debug("disarming kprobe at %p\n", p->addr);
*p->addr = p->opcode;
flush_icache_range((unsigned long)p->addr,
(unsigned long)p->addr + sizeof(kprobe_opcode_t));
}
static void __kprobes prepare_singlestep(struct kprobe *p, struct pt_regs *regs)
{
unsigned long dc;
pr_debug("preparing to singlestep over %p (PC=%08lx)\n",
p->addr, regs->pc);
BUG_ON(!(sysreg_read(SR) & SYSREG_BIT(SR_D)));
dc = __mfdr(DBGREG_DC);
dc |= DC_SS;
__mtdr(DBGREG_DC, dc);
/*
* We must run the instruction from its original location
* since it may actually reference PC.
*
* TODO: Do the instruction replacement directly in icache.
*/
*p->addr = p->opcode;
flush_icache_range((unsigned long)p->addr,
(unsigned long)p->addr + sizeof(kprobe_opcode_t));
}
static void __kprobes resume_execution(struct kprobe *p, struct pt_regs *regs)
{
unsigned long dc;
pr_debug("resuming execution at PC=%08lx\n", regs->pc);
dc = __mfdr(DBGREG_DC);
dc &= ~DC_SS;
__mtdr(DBGREG_DC, dc);
*p->addr = BREAKPOINT_INSTRUCTION;
flush_icache_range((unsigned long)p->addr,
(unsigned long)p->addr + sizeof(kprobe_opcode_t));
}
static void __kprobes set_current_kprobe(struct kprobe *p)
{
__get_cpu_var(current_kprobe) = p;
}
static int __kprobes kprobe_handler(struct pt_regs *regs)
{
struct kprobe *p;
void *addr = (void *)regs->pc;
int ret = 0;
pr_debug("kprobe_handler: kprobe_running=%d\n",
kprobe_running());
/*
* We don't want to be preempted for the entire
* duration of kprobe processing
*/
preempt_disable();
/* Check that we're not recursing */
if (kprobe_running()) {
p = get_kprobe(addr);
if (p) {
if (kprobe_status == KPROBE_HIT_SS) {
printk("FIXME: kprobe hit while single-stepping!\n");
goto no_kprobe;
}
printk("FIXME: kprobe hit while handling another kprobe\n");
goto no_kprobe;
} else {
p = kprobe_running();
if (p->break_handler && p->break_handler(p, regs))
goto ss_probe;
}
/* If it's not ours, can't be delete race, (we hold lock). */
goto no_kprobe;
}
p = get_kprobe(addr);
if (!p)
goto no_kprobe;
kprobe_status = KPROBE_HIT_ACTIVE;
set_current_kprobe(p);
if (p->pre_handler && p->pre_handler(p, regs))
/* handler has already set things up, so skip ss setup */
return 1;
ss_probe:
prepare_singlestep(p, regs);
kprobe_status = KPROBE_HIT_SS;
return 1;
no_kprobe:
return ret;
}
static int __kprobes post_kprobe_handler(struct pt_regs *regs)
{
struct kprobe *cur = kprobe_running();
pr_debug("post_kprobe_handler, cur=%p\n", cur);
if (!cur)
return 0;
if (cur->post_handler) {
kprobe_status = KPROBE_HIT_SSDONE;
cur->post_handler(cur, regs, 0);
}
resume_execution(cur, regs);
reset_current_kprobe();
preempt_enable_no_resched();
return 1;
}
static int __kprobes kprobe_fault_handler(struct pt_regs *regs, int trapnr)
{
struct kprobe *cur = kprobe_running();
pr_debug("kprobe_fault_handler: trapnr=%d\n", trapnr);
if (cur->fault_handler && cur->fault_handler(cur, regs, trapnr))
return 1;
if (kprobe_status & KPROBE_HIT_SS) {
resume_execution(cur, regs);
preempt_enable_no_resched();
}
return 0;
}
/*
* Wrapper routine to for handling exceptions.
*/
int __kprobes kprobe_exceptions_notify(struct notifier_block *self,
unsigned long val, void *data)
{
struct die_args *args = (struct die_args *)data;
int ret = NOTIFY_DONE;
pr_debug("kprobe_exceptions_notify: val=%lu, data=%p\n",
val, data);
switch (val) {
case DIE_BREAKPOINT:
if (kprobe_handler(args->regs))
ret = NOTIFY_STOP;
break;
case DIE_SSTEP:
if (post_kprobe_handler(args->regs))
ret = NOTIFY_STOP;
break;
case DIE_FAULT:
if (kprobe_running()
&& kprobe_fault_handler(args->regs, args->trapnr))
ret = NOTIFY_STOP;
break;
default:
break;
}
return ret;
}
int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
struct jprobe *jp = container_of(p, struct jprobe, kp);
memcpy(&jprobe_saved_regs, regs, sizeof(struct pt_regs));
/*
* TODO: We should probably save some of the stack here as
* well, since gcc may pass arguments on the stack for certain
* functions (lots of arguments, large aggregates, varargs)
*/
/* setup return addr to the jprobe handler routine */
regs->pc = (unsigned long)jp->entry;
return 1;
}
void __kprobes jprobe_return(void)
{
asm volatile("breakpoint" ::: "memory");
}
int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
{
/*
* FIXME - we should ideally be validating that we got here 'cos
* of the "trap" in jprobe_return() above, before restoring the
* saved regs...
*/
memcpy(regs, &jprobe_saved_regs, sizeof(struct pt_regs));
return 1;
}
int __init arch_init_kprobes(void)
{
printk("KPROBES: Enabling monitor mode (MM|DBE)...\n");
__mtdr(DBGREG_DC, DC_MM | DC_DBE);
/* TODO: Register kretprobe trampoline */
return 0;
}
/*
* AVR32-specific kernel module loader
*
* Copyright (C) 2005-2006 Atmel Corporation
*
* GOT initialization parts are based on the s390 version
* Copyright (C) 2002, 2003 IBM Deutschland Entwicklung GmbH,
* IBM Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/moduleloader.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/elf.h>
#include <linux/vmalloc.h>
void *module_alloc(unsigned long size)
{
if (size == 0)
return NULL;
return vmalloc(size);
}
void module_free(struct module *mod, void *module_region)
{
vfree(mod->arch.syminfo);
mod->arch.syminfo = NULL;
vfree(module_region);
/* FIXME: if module_region == mod->init_region, trim exception
* table entries. */
}
static inline int check_rela(Elf32_Rela *rela, struct module *module,
char *strings, Elf32_Sym *symbols)
{
struct mod_arch_syminfo *info;
info = module->arch.syminfo + ELF32_R_SYM(rela->r_info);
switch (ELF32_R_TYPE(rela->r_info)) {
case R_AVR32_GOT32:
case R_AVR32_GOT16:
case R_AVR32_GOT8:
case R_AVR32_GOT21S:
case R_AVR32_GOT18SW: /* mcall */
case R_AVR32_GOT16S: /* ld.w */
if (rela->r_addend != 0) {
printk(KERN_ERR
"GOT relocation against %s at offset %u with addend\n",
strings + symbols[ELF32_R_SYM(rela->r_info)].st_name,
rela->r_offset);
return -ENOEXEC;
}
if (info->got_offset == -1UL) {
info->got_offset = module->arch.got_size;
module->arch.got_size += sizeof(void *);
}
pr_debug("GOT[%3lu] %s\n", info->got_offset,
strings + symbols[ELF32_R_SYM(rela->r_info)].st_name);
break;
}
return 0;
}
int module_frob_arch_sections(Elf_Ehdr *hdr, Elf_Shdr *sechdrs,
char *secstrings, struct module *module)
{
Elf32_Shdr *symtab;
Elf32_Sym *symbols;
Elf32_Rela *rela;
char *strings;
int nrela, i, j;
int ret;
/* Find the symbol table */
symtab = NULL;
for (i = 0; i < hdr->e_shnum; i++)
switch (sechdrs[i].sh_type) {
case SHT_SYMTAB:
symtab = &sechdrs[i];
break;
}
if (!symtab) {
printk(KERN_ERR "module %s: no symbol table\n", module->name);
return -ENOEXEC;
}
/* Allocate room for one syminfo structure per symbol. */
module->arch.nsyms = symtab->sh_size / sizeof(Elf_Sym);
module->arch.syminfo = vmalloc(module->arch.nsyms
* sizeof(struct mod_arch_syminfo));
if (!module->arch.syminfo)
return -ENOMEM;
symbols = (void *)hdr + symtab->sh_offset;
strings = (void *)hdr + sechdrs[symtab->sh_link].sh_offset;
for (i = 0; i < module->arch.nsyms; i++) {
if (symbols[i].st_shndx == SHN_UNDEF &&
strcmp(strings + symbols[i].st_name,
"_GLOBAL_OFFSET_TABLE_") == 0)
/* "Define" it as absolute. */
symbols[i].st_shndx = SHN_ABS;
module->arch.syminfo[i].got_offset = -1UL;
module->arch.syminfo[i].got_initialized = 0;
}
/* Allocate GOT entries for symbols that need it. */
module->arch.got_size = 0;
for (i = 0; i < hdr->e_shnum; i++) {
if (sechdrs[i].sh_type != SHT_RELA)
continue;
nrela = sechdrs[i].sh_size / sizeof(Elf32_Rela);
rela = (void *)hdr + sechdrs[i].sh_offset;
for (j = 0; j < nrela; j++) {
ret = check_rela(rela + j, module,
strings, symbols);
if (ret)
goto out_free_syminfo;
}
}
/*
* Increase core size to make room for GOT and set start
* offset for GOT.
*/
module->core_size = ALIGN(module->core_size, 4);
module->arch.got_offset = module->core_size;
module->core_size += module->arch.got_size;
return 0;
out_free_syminfo:
vfree(module->arch.syminfo);
module->arch.syminfo = NULL;
return ret;
}
static inline int reloc_overflow(struct module *module, const char *reloc_name,
Elf32_Addr relocation)
{
printk(KERN_ERR "module %s: Value %lx does not fit relocation %s\n",
module->name, (unsigned long)relocation, reloc_name);
return -ENOEXEC;
}
#define get_u16(loc) (*((uint16_t *)loc))
#define put_u16(loc, val) (*((uint16_t *)loc) = (val))
int apply_relocate_add(Elf32_Shdr *sechdrs, const char *strtab,
unsigned int symindex, unsigned int relindex,
struct module *module)
{
Elf32_Shdr *symsec = sechdrs + symindex;
Elf32_Shdr *relsec = sechdrs + relindex;
Elf32_Shdr *dstsec = sechdrs + relsec->sh_info;
Elf32_Rela *rel = (void *)relsec->sh_addr;
unsigned int i;
int ret = 0;
for (i = 0; i < relsec->sh_size / sizeof(Elf32_Rela); i++, rel++) {
struct mod_arch_syminfo *info;
Elf32_Sym *sym;
Elf32_Addr relocation;
uint32_t *location;
uint32_t value;
location = (void *)dstsec->sh_addr + rel->r_offset;
sym = (Elf32_Sym *)symsec->sh_addr + ELF32_R_SYM(rel->r_info);
relocation = sym->st_value + rel->r_addend;
info = module->arch.syminfo + ELF32_R_SYM(rel->r_info);
/* Initialize GOT entry if necessary */
switch (ELF32_R_TYPE(rel->r_info)) {
case R_AVR32_GOT32:
case R_AVR32_GOT16:
case R_AVR32_GOT8:
case R_AVR32_GOT21S:
case R_AVR32_GOT18SW:
case R_AVR32_GOT16S:
if (!info->got_initialized) {
Elf32_Addr *gotent;
gotent = (module->module_core
+ module->arch.got_offset
+ info->got_offset);
*gotent = relocation;
info->got_initialized = 1;
}
relocation = info->got_offset;
break;
}
switch (ELF32_R_TYPE(rel->r_info)) {
case R_AVR32_32:
case R_AVR32_32_CPENT:
*location = relocation;
break;
case R_AVR32_22H_PCREL:
relocation -= (Elf32_Addr)location;
if ((relocation & 0xffe00001) != 0
&& (relocation & 0xffc00001) != 0xffc00000)
return reloc_overflow(module,
"R_AVR32_22H_PCREL",
relocation);
relocation >>= 1;
value = *location;
value = ((value & 0xe1ef0000)
| (relocation & 0xffff)
| ((relocation & 0x10000) << 4)
| ((relocation & 0x1e0000) << 8));
*location = value;
break;
case R_AVR32_11H_PCREL:
relocation -= (Elf32_Addr)location;
if ((relocation & 0xfffffc01) != 0
&& (relocation & 0xfffff801) != 0xfffff800)
return reloc_overflow(module,
"R_AVR32_11H_PCREL",
relocation);
value = get_u16(location);
value = ((value & 0xf00c)
| ((relocation & 0x1fe) << 3)
| ((relocation & 0x600) >> 9));
put_u16(location, value);
break;
case R_AVR32_9H_PCREL:
relocation -= (Elf32_Addr)location;
if ((relocation & 0xffffff01) != 0
&& (relocation & 0xfffffe01) != 0xfffffe00)
return reloc_overflow(module,
"R_AVR32_9H_PCREL",
relocation);
value = get_u16(location);
value = ((value & 0xf00f)
| ((relocation & 0x1fe) << 3));
put_u16(location, value);
break;
case R_AVR32_9UW_PCREL:
relocation -= ((Elf32_Addr)location) & 0xfffffffc;
if ((relocation & 0xfffffc03) != 0)
return reloc_overflow(module,
"R_AVR32_9UW_PCREL",
relocation);
value = get_u16(location);
value = ((value & 0xf80f)
| ((relocation & 0x1fc) << 2));
put_u16(location, value);
break;
case R_AVR32_GOTPC:
/*
* R6 = PC - (PC - GOT)
*
* At this point, relocation contains the
* value of PC. Just subtract the value of
* GOT, and we're done.
*/
pr_debug("GOTPC: PC=0x%lx, got_offset=0x%lx, core=0x%p\n",
relocation, module->arch.got_offset,
module->module_core);
relocation -= ((unsigned long)module->module_core
+ module->arch.got_offset);
*location = relocation;
break;
case R_AVR32_GOT18SW:
if ((relocation & 0xfffe0003) != 0
&& (relocation & 0xfffc0003) != 0xffff0000)
return reloc_overflow(module, "R_AVR32_GOT18SW",
relocation);
relocation >>= 2;
/* fall through */
case R_AVR32_GOT16S:
if ((relocation & 0xffff8000) != 0
&& (relocation & 0xffff0000) != 0xffff0000)
return reloc_overflow(module, "R_AVR32_GOT16S",
relocation);
pr_debug("GOT reloc @ 0x%lx -> %lu\n",
rel->r_offset, relocation);
value = *location;
value = ((value & 0xffff0000)
| (relocation & 0xffff));
*location = value;
break;
default:
printk(KERN_ERR "module %s: Unknown relocation: %u\n",
module->name, ELF32_R_TYPE(rel->r_info));
return -ENOEXEC;
}
}
return ret;
}
int apply_relocate(Elf32_Shdr *sechdrs, const char *strtab,
unsigned int symindex, unsigned int relindex,
struct module *module)
{
printk(KERN_ERR "module %s: REL relocations are not supported\n",
module->name);
return -ENOEXEC;
}
int module_finalize(const Elf_Ehdr *hdr, const Elf_Shdr *sechdrs,
struct module *module)
{
vfree(module->arch.syminfo);
module->arch.syminfo = NULL;
return 0;
}
void module_arch_cleanup(struct module *module)
{
}
/*
* Copyright (C) 2004-2006 Atmel Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/sched.h>
#include <linux/module.h>
#include <linux/kallsyms.h>
#include <linux/fs.h>
#include <linux/ptrace.h>
#include <linux/reboot.h>
#include <linux/unistd.h>
#include <asm/sysreg.h>
#include <asm/ocd.h>
void (*pm_power_off)(void) = NULL;
EXPORT_SYMBOL(pm_power_off);
/*
* This file handles the architecture-dependent parts of process handling..
*/
void cpu_idle(void)
{
/* endless idle loop with no priority at all */
while (1) {
/* TODO: Enter sleep mode */
while (!need_resched())
cpu_relax();
preempt_enable_no_resched();
schedule();
preempt_disable();
}
}
void machine_halt(void)
{
}
void machine_power_off(void)
{
}
void machine_restart(char *cmd)
{
__mtdr(DBGREG_DC, DC_DBE);
__mtdr(DBGREG_DC, DC_RES);
while (1) ;
}
/*
* PC is actually discarded when returning from a system call -- the
* return address must be stored in LR. This function will make sure
* LR points to do_exit before starting the thread.
*
* Also, when returning from fork(), r12 is 0, so we must copy the
* argument as well.
*
* r0 : The argument to the main thread function
* r1 : The address of do_exit
* r2 : The address of the main thread function
*/
asmlinkage extern void kernel_thread_helper(void);
__asm__(" .type kernel_thread_helper, @function\n"
"kernel_thread_helper:\n"
" mov r12, r0\n"
" mov lr, r2\n"
" mov pc, r1\n"
" .size kernel_thread_helper, . - kernel_thread_helper");
int kernel_thread(int (*fn)(void *), void *arg, unsigned long flags)
{
struct pt_regs regs;
memset(&regs, 0, sizeof(regs));
regs.r0 = (unsigned long)arg;
regs.r1 = (unsigned long)fn;
regs.r2 = (unsigned long)do_exit;
regs.lr = (unsigned long)kernel_thread_helper;
regs.pc = (unsigned long)kernel_thread_helper;
regs.sr = MODE_SUPERVISOR;
return do_fork(flags | CLONE_VM | CLONE_UNTRACED,
0, &regs, 0, NULL, NULL);
}
EXPORT_SYMBOL(kernel_thread);
/*
* Free current thread data structures etc
*/
void exit_thread(void)
{
/* nothing to do */
}
void flush_thread(void)
{
/* nothing to do */
}
void release_thread(struct task_struct *dead_task)
{
/* do nothing */
}
static const char *cpu_modes[] = {
"Application", "Supervisor", "Interrupt level 0", "Interrupt level 1",
"Interrupt level 2", "Interrupt level 3", "Exception", "NMI"
};
void show_regs(struct pt_regs *regs)
{
unsigned long sp = regs->sp;
unsigned long lr = regs->lr;
unsigned long mode = (regs->sr & MODE_MASK) >> MODE_SHIFT;
if (!user_mode(regs))
sp = (unsigned long)regs + FRAME_SIZE_FULL;
print_symbol("PC is at %s\n", instruction_pointer(regs));
print_symbol("LR is at %s\n", lr);
printk("pc : [<%08lx>] lr : [<%08lx>] %s\n"
"sp : %08lx r12: %08lx r11: %08lx\n",
instruction_pointer(regs),
lr, print_tainted(), sp, regs->r12, regs->r11);
printk("r10: %08lx r9 : %08lx r8 : %08lx\n",
regs->r10, regs->r9, regs->r8);
printk("r7 : %08lx r6 : %08lx r5 : %08lx r4 : %08lx\n",
regs->r7, regs->r6, regs->r5, regs->r4);
printk("r3 : %08lx r2 : %08lx r1 : %08lx r0 : %08lx\n",
regs->r3, regs->r2, regs->r1, regs->r0);
printk("Flags: %c%c%c%c%c\n",
regs->sr & SR_Q ? 'Q' : 'q',
regs->sr & SR_V ? 'V' : 'v',
regs->sr & SR_N ? 'N' : 'n',
regs->sr & SR_Z ? 'Z' : 'z',
regs->sr & SR_C ? 'C' : 'c');
printk("Mode bits: %c%c%c%c%c%c%c%c%c\n",
regs->sr & SR_H ? 'H' : 'h',
regs->sr & SR_R ? 'R' : 'r',
regs->sr & SR_J ? 'J' : 'j',
regs->sr & SR_EM ? 'E' : 'e',
regs->sr & SR_I3M ? '3' : '.',
regs->sr & SR_I2M ? '2' : '.',
regs->sr & SR_I1M ? '1' : '.',
regs->sr & SR_I0M ? '0' : '.',
regs->sr & SR_GM ? 'G' : 'g');
printk("CPU Mode: %s\n", cpu_modes[mode]);
show_trace(NULL, (unsigned long *)sp, regs);
}
EXPORT_SYMBOL(show_regs);
/* Fill in the fpu structure for a core dump. This is easy -- we don't have any */
int dump_fpu(struct pt_regs *regs, elf_fpregset_t *fpu)
{
/* Not valid */
return 0;
}
asmlinkage void ret_from_fork(void);
int copy_thread(int nr, unsigned long clone_flags, unsigned long usp,
unsigned long unused,
struct task_struct *p, struct pt_regs *regs)
{
struct pt_regs *childregs;
childregs = ((struct pt_regs *)(THREAD_SIZE + (unsigned long)p->thread_info)) - 1;
*childregs = *regs;
if (user_mode(regs))
childregs->sp = usp;
else
childregs->sp = (unsigned long)p->thread_info + THREAD_SIZE;
childregs->r12 = 0; /* Set return value for child */
p->thread.cpu_context.sr = MODE_SUPERVISOR | SR_GM;
p->thread.cpu_context.ksp = (unsigned long)childregs;
p->thread.cpu_context.pc = (unsigned long)ret_from_fork;
return 0;
}
/* r12-r8 are dummy parameters to force the compiler to use the stack */
asmlinkage int sys_fork(struct pt_regs *regs)
{
return do_fork(SIGCHLD, regs->sp, regs, 0, NULL, NULL);
}
asmlinkage int sys_clone(unsigned long clone_flags, unsigned long newsp,
unsigned long parent_tidptr,
unsigned long child_tidptr, struct pt_regs *regs)
{
if (!newsp)
newsp = regs->sp;
return do_fork(clone_flags, newsp, regs, 0,
(int __user *)parent_tidptr,
(int __user *)child_tidptr);
}
asmlinkage int sys_vfork(struct pt_regs *regs)
{
return do_fork(CLONE_VFORK | CLONE_VM | SIGCHLD, regs->sp, regs,
0, NULL, NULL);
}
asmlinkage int sys_execve(char __user *ufilename, char __user *__user *uargv,
char __user *__user *uenvp, struct pt_regs *regs)
{
int error;
char *filename;
filename = getname(ufilename);
error = PTR_ERR(filename);
if (IS_ERR(filename))
goto out;
error = do_execve(filename, uargv, uenvp, regs);
if (error == 0)
current->ptrace &= ~PT_DTRACE;
putname(filename);
out:
return error;
}
/*
* This function is supposed to answer the question "who called
* schedule()?"
*/
unsigned long get_wchan(struct task_struct *p)
{
unsigned long pc;
unsigned long stack_page;
if (!p || p == current || p->state == TASK_RUNNING)
return 0;
stack_page = (unsigned long)p->thread_info;
BUG_ON(!stack_page);
/*
* The stored value of PC is either the address right after
* the call to __switch_to() or ret_from_fork.
*/
pc = thread_saved_pc(p);
if (in_sched_functions(pc)) {
#ifdef CONFIG_FRAME_POINTER
unsigned long fp = p->thread.cpu_context.r7;
BUG_ON(fp < stack_page || fp > (THREAD_SIZE + stack_page));
pc = *(unsigned long *)fp;
#else
/*
* We depend on the frame size of schedule here, which
* is actually quite ugly. It might be possible to
* determine the frame size automatically at build
* time by doing this:
* - compile sched.c
* - disassemble the resulting sched.o
* - look for 'sub sp,??' shortly after '<schedule>:'
*/
unsigned long sp = p->thread.cpu_context.ksp + 16;
BUG_ON(sp < stack_page || sp > (THREAD_SIZE + stack_page));
pc = *(unsigned long *)sp;
#endif
}
return pc;
}
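The value computed by get_wchan() above is what the kernel exports through /proc/<pid>/wchan and what ps reports in its wchan column. As a minimal illustration of the consumer side (not part of this patch; the pid is supplied by the user), a userspace tool can simply read the proc file:

/* Illustrative userspace reader for the wchan value produced above. */
#include <stdio.h>

int main(int argc, char **argv)
{
	char path[64], wchan[128];
	FILE *f;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <pid>\n", argv[0]);
		return 1;
	}
	snprintf(path, sizeof(path), "/proc/%s/wchan", argv[1]);
	f = fopen(path, "r");
	if (!f) {
		perror(path);
		return 1;
	}
	if (fgets(wchan, sizeof(wchan), f))
		printf("task %s is sleeping in %s\n", argv[1], wchan);
	fclose(f);
	return 0;
}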
/*
* Copyright (C) 2004-2006 Atmel Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#undef DEBUG
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/mm.h>
#include <linux/smp_lock.h>
#include <linux/ptrace.h>
#include <linux/errno.h>
#include <linux/user.h>
#include <linux/security.h>
#include <linux/unistd.h>
#include <linux/notifier.h>
#include <asm/traps.h>
#include <asm/uaccess.h>
#include <asm/ocd.h>
#include <asm/mmu_context.h>
#include <asm/kdebug.h>
static struct pt_regs *get_user_regs(struct task_struct *tsk)
{
return (struct pt_regs *)((unsigned long) tsk->thread_info +
THREAD_SIZE - sizeof(struct pt_regs));
}
static void ptrace_single_step(struct task_struct *tsk)
{
pr_debug("ptrace_single_step: pid=%u, SR=0x%08lx\n",
tsk->pid, tsk->thread.cpu_context.sr);
if (!(tsk->thread.cpu_context.sr & SR_D)) {
/*
* Set a breakpoint at the current pc to force the
* process into debug mode. The syscall/exception
* exit code will set a breakpoint at the return
* address when this flag is set.
*/
pr_debug("ptrace_single_step: Setting TIF_BREAKPOINT\n");
set_tsk_thread_flag(tsk, TIF_BREAKPOINT);
}
/* The monitor code will do the actual step for us */
set_tsk_thread_flag(tsk, TIF_SINGLE_STEP);
}
/*
* Called by kernel/ptrace.c when detaching
*
* Make sure any single step bits, etc. are not set
*/
void ptrace_disable(struct task_struct *child)
{
clear_tsk_thread_flag(child, TIF_SINGLE_STEP);
}
/*
* Handle hitting a breakpoint
*/
static void ptrace_break(struct task_struct *tsk, struct pt_regs *regs)
{
siginfo_t info;
info.si_signo = SIGTRAP;
info.si_errno = 0;
info.si_code = TRAP_BRKPT;
info.si_addr = (void __user *)instruction_pointer(regs);
pr_debug("ptrace_break: Sending SIGTRAP to PID %u (pc = 0x%p)\n",
tsk->pid, info.si_addr);
force_sig_info(SIGTRAP, &info, tsk);
}
/*
* Read the word at offset "offset" into the task's "struct user". We
* actually access the pt_regs struct stored on the kernel stack.
*/
static int ptrace_read_user(struct task_struct *tsk, unsigned long offset,
unsigned long __user *data)
{
unsigned long *regs;
unsigned long value;
pr_debug("ptrace_read_user(%p, %#lx, %p)\n",
tsk, offset, data);
if (offset & 3 || offset >= sizeof(struct user)) {
printk("ptrace_read_user: invalid offset 0x%08lx\n", offset);
return -EIO;
}
regs = (unsigned long *)get_user_regs(tsk);
value = 0;
if (offset < sizeof(struct pt_regs))
value = regs[offset / sizeof(regs[0])];
return put_user(value, data);
}
/*
* Write the word "value" to offset "offset" into the task's "struct
* user". We actually access the pt_regs struct stored on the kernel
* stack.
*/
static int ptrace_write_user(struct task_struct *tsk, unsigned long offset,
unsigned long value)
{
unsigned long *regs;
if (offset & 3 || offset >= sizeof(struct user)) {
printk("ptrace_write_user: invalid offset 0x%08lx\n", offset);
return -EIO;
}
if (offset >= sizeof(struct pt_regs))
return 0;
regs = (unsigned long *)get_user_regs(tsk);
regs[offset / sizeof(regs[0])] = value;
return 0;
}
static int ptrace_getregs(struct task_struct *tsk, void __user *uregs)
{
struct pt_regs *regs = get_user_regs(tsk);
return copy_to_user(uregs, regs, sizeof(*regs)) ? -EFAULT : 0;
}
static int ptrace_setregs(struct task_struct *tsk, const void __user *uregs)
{
struct pt_regs newregs;
int ret;
ret = -EFAULT;
if (copy_from_user(&newregs, uregs, sizeof(newregs)) == 0) {
struct pt_regs *regs = get_user_regs(tsk);
ret = -EINVAL;
if (valid_user_regs(&newregs)) {
*regs = newregs;
ret = 0;
}
}
return ret;
}
long arch_ptrace(struct task_struct *child, long request, long addr, long data)
{
unsigned long tmp;
int ret;
pr_debug("arch_ptrace(%ld, %ld, %#lx, %#lx)\n",
request, child->pid, addr, data);
pr_debug("ptrace: Enabling monitor mode...\n");
__mtdr(DBGREG_DC, __mfdr(DBGREG_DC) | DC_MM | DC_DBE);
switch (request) {
/* Read the word at location addr in the child process */
case PTRACE_PEEKTEXT:
case PTRACE_PEEKDATA:
ret = access_process_vm(child, addr, &tmp, sizeof(tmp), 0);
if (ret == sizeof(tmp))
ret = put_user(tmp, (unsigned long __user *)data);
else
ret = -EIO;
break;
case PTRACE_PEEKUSR:
ret = ptrace_read_user(child, addr,
(unsigned long __user *)data);
break;
/* Write the word in data at location addr */
case PTRACE_POKETEXT:
case PTRACE_POKEDATA:
ret = access_process_vm(child, addr, &data, sizeof(data), 1);
if (ret == sizeof(data))
ret = 0;
else
ret = -EIO;
break;
case PTRACE_POKEUSR:
ret = ptrace_write_user(child, addr, data);
break;
/* continue and stop at next (return from) syscall */
case PTRACE_SYSCALL:
/* restart after signal */
case PTRACE_CONT:
ret = -EIO;
if (!valid_signal(data))
break;
if (request == PTRACE_SYSCALL)
set_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
else
clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
child->exit_code = data;
/* XXX: Are we sure no breakpoints are active here? */
wake_up_process(child);
ret = 0;
break;
/*
* Make the child exit. Best I can do is send it a
* SIGKILL. Perhaps it should be put in the status that it
* wants to exit.
*/
case PTRACE_KILL:
ret = 0;
if (child->exit_state == EXIT_ZOMBIE)
break;
child->exit_code = SIGKILL;
wake_up_process(child);
break;
/*
* execute single instruction.
*/
case PTRACE_SINGLESTEP:
ret = -EIO;
if (!valid_signal(data))
break;
clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
ptrace_single_step(child);
child->exit_code = data;
wake_up_process(child);
ret = 0;
break;
/* Detach a process that was attached */
case PTRACE_DETACH:
ret = ptrace_detach(child, data);
break;
case PTRACE_GETREGS:
ret = ptrace_getregs(child, (void __user *)data);
break;
case PTRACE_SETREGS:
ret = ptrace_setregs(child, (const void __user *)data);
break;
default:
ret = ptrace_request(child, request, addr, data);
break;
}
pr_debug("sys_ptrace returning %d (DC = 0x%08lx)\n", ret, __mfdr(DBGREG_DC));
return ret;
}
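The request cases above are driven by ordinary ptrace(2) calls from a debugger. A minimal sketch of that side, assuming nothing about the AVR32 register layout (PTRACE_TRACEME, PTRACE_PEEKDATA and PTRACE_CONT correspond directly to cases handled above; the peeked address is arbitrary and only for illustration):

/* Hedged userspace sketch: trace a child and peek one word of its memory. */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
	pid_t pid = fork();

	if (pid == 0) {
		ptrace(PTRACE_TRACEME, 0, NULL, NULL);	/* let the parent trace us */
		execl("/bin/true", "true", (char *)NULL);
		_exit(1);
	}
	waitpid(pid, NULL, 0);				/* child stops after exec */
	errno = 0;
	long word = ptrace(PTRACE_PEEKDATA, pid, (void *)0x1000, NULL);
	if (errno == 0)
		printf("word at 0x1000: %#lx\n", word);
	ptrace(PTRACE_CONT, pid, NULL, NULL);		/* resume the child */
	waitpid(pid, NULL, 0);
	return 0;
}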
asmlinkage void syscall_trace(void)
{
pr_debug("syscall_trace called\n");
if (!test_thread_flag(TIF_SYSCALL_TRACE))
return;
if (!(current->ptrace & PT_PTRACED))
return;
pr_debug("syscall_trace: notifying parent\n");
/* The 0x80 provides a way for the tracing parent to
* distinguish between a syscall stop and SIGTRAP delivery */
ptrace_notify(SIGTRAP | ((current->ptrace & PT_TRACESYSGOOD)
? 0x80 : 0));
/*
* this isn't the same as continuing with a signal, but it
* will do for normal use. strace only continues with a
* signal if the stopping signal is not SIGTRAP. -brl
*/
if (current->exit_code) {
pr_debug("syscall_trace: sending signal %d to PID %u\n",
current->exit_code, current->pid);
send_sig(current->exit_code, current, 1);
current->exit_code = 0;
}
}
asmlinkage void do_debug_priv(struct pt_regs *regs)
{
unsigned long dc, ds;
unsigned long die_val;
ds = __mfdr(DBGREG_DS);
pr_debug("do_debug_priv: pc = %08lx, ds = %08lx\n", regs->pc, ds);
if (ds & DS_SSS)
die_val = DIE_SSTEP;
else
die_val = DIE_BREAKPOINT;
if (notify_die(die_val, regs, 0, SIGTRAP) == NOTIFY_STOP)
return;
if (likely(ds & DS_SSS)) {
extern void itlb_miss(void);
extern void tlb_miss_common(void);
struct thread_info *ti;
dc = __mfdr(DBGREG_DC);
dc &= ~DC_SS;
__mtdr(DBGREG_DC, dc);
ti = current_thread_info();
ti->flags |= _TIF_BREAKPOINT;
/* The TLB miss handlers don't check thread flags */
if ((regs->pc >= (unsigned long)&itlb_miss)
&& (regs->pc <= (unsigned long)&tlb_miss_common)) {
__mtdr(DBGREG_BWA2A, sysreg_read(RAR_EX));
__mtdr(DBGREG_BWC2A, 0x40000001 | (get_asid() << 1));
}
/*
* If we're running in supervisor mode, the breakpoint
* will take us where we want directly, no need to
* single step.
*/
if ((regs->sr & MODE_MASK) != MODE_SUPERVISOR)
ti->flags |= _TIF_SINGLE_STEP;
} else {
panic("Unable to handle debug trap at pc = %08lx\n",
regs->pc);
}
}
/*
* Handle breakpoints, single steps and other debuggy things. To keep
* things simple initially, we run with interrupts and exceptions
* disabled all the time.
*/
asmlinkage void do_debug(struct pt_regs *regs)
{
unsigned long dc, ds;
ds = __mfdr(DBGREG_DS);
pr_debug("do_debug: pc = %08lx, ds = %08lx\n", regs->pc, ds);
if (test_thread_flag(TIF_BREAKPOINT)) {
pr_debug("TIF_BREAKPOINT set\n");
/* We're taking care of it */
clear_thread_flag(TIF_BREAKPOINT);
__mtdr(DBGREG_BWC2A, 0);
}
if (test_thread_flag(TIF_SINGLE_STEP)) {
pr_debug("TIF_SINGLE_STEP set, ds = 0x%08lx\n", ds);
if (ds & DS_SSS) {
dc = __mfdr(DBGREG_DC);
dc &= ~DC_SS;
__mtdr(DBGREG_DC, dc);
clear_thread_flag(TIF_SINGLE_STEP);
ptrace_break(current, regs);
}
} else {
/* regular breakpoint */
ptrace_break(current, regs);
}
}
/*
* AVR32 semaphore implementation.
*
* Copyright (C) 2004-2006 Atmel Corporation
*
* Based on linux/arch/i386/kernel/semaphore.c
* Copyright (C) 1999 Linus Torvalds
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/sched.h>
#include <linux/errno.h>
#include <linux/module.h>
#include <asm/semaphore.h>
#include <asm/atomic.h>
/*
* Semaphores are implemented using a two-way counter:
* The "count" variable is decremented for each process
* that tries to acquire the semaphore, while the "sleeping"
* variable is a count of such acquires.
*
* Notably, the inline "up()" and "down()" functions can
* efficiently test if they need to do any extra work (up
* needs to do something only if count was negative before
* the increment operation).
*
* "sleeping" and the contention routine ordering is protected
* by the spinlock in the semaphore's waitqueue head.
*
* Note that these functions are only called when there is
* contention on the lock, and as such all this is the
* "non-critical" part of the whole semaphore business. The
* critical part is the inline stuff in <asm/semaphore.h>
* where we want to avoid any extra jumps and calls.
*/
/*
* Logic:
* - only on a boundary condition do we need to care. When we go
* from a negative count to a non-negative, we wake people up.
* - when we go from a non-negative count to a negative do we
* (a) synchronize with the "sleeper" count and (b) make sure
* that we're on the wakeup list before we synchronize so that
* we cannot lose wakeup events.
*/
void __up(struct semaphore *sem)
{
wake_up(&sem->wait);
}
EXPORT_SYMBOL(__up);
void __sched __down(struct semaphore *sem)
{
struct task_struct *tsk = current;
DECLARE_WAITQUEUE(wait, tsk);
unsigned long flags;
tsk->state = TASK_UNINTERRUPTIBLE;
spin_lock_irqsave(&sem->wait.lock, flags);
add_wait_queue_exclusive_locked(&sem->wait, &wait);
sem->sleepers++;
for (;;) {
int sleepers = sem->sleepers;
/*
* Add "everybody else" into it. They aren't
* playing, because we own the spinlock in
* the wait_queue_head.
*/
if (atomic_add_return(sleepers - 1, &sem->count) >= 0) {
sem->sleepers = 0;
break;
}
sem->sleepers = 1; /* us - see -1 above */
spin_unlock_irqrestore(&sem->wait.lock, flags);
schedule();
spin_lock_irqsave(&sem->wait.lock, flags);
tsk->state = TASK_UNINTERRUPTIBLE;
}
remove_wait_queue_locked(&sem->wait, &wait);
wake_up_locked(&sem->wait);
spin_unlock_irqrestore(&sem->wait.lock, flags);
tsk->state = TASK_RUNNING;
}
EXPORT_SYMBOL(__down);
int __sched __down_interruptible(struct semaphore *sem)
{
int retval = 0;
struct task_struct *tsk = current;
DECLARE_WAITQUEUE(wait, tsk);
unsigned long flags;
tsk->state = TASK_INTERRUPTIBLE;
spin_lock_irqsave(&sem->wait.lock, flags);
add_wait_queue_exclusive_locked(&sem->wait, &wait);
sem->sleepers++;
for (;;) {
int sleepers = sem->sleepers;
/*
* With signals pending, this turns into the trylock
* failure case - we won't be sleeping, and we can't
* get the lock as it has contention. Just correct the
* count and exit.
*/
if (signal_pending(current)) {
retval = -EINTR;
sem->sleepers = 0;
atomic_add(sleepers, &sem->count);
break;
}
/*
* Add "everybody else" into it. They aren't
* playing, because we own the spinlock in
* the wait_queue_head.
*/
if (atomic_add_return(sleepers - 1, &sem->count) >= 0) {
sem->sleepers = 0;
break;
}
sem->sleepers = 1; /* us - see -1 above */
spin_unlock_irqrestore(&sem->wait.lock, flags);
schedule();
spin_lock_irqsave(&sem->wait.lock, flags);
tsk->state = TASK_INTERRUPTIBLE;
}
remove_wait_queue_locked(&sem->wait, &wait);
wake_up_locked(&sem->wait);
spin_unlock_irqrestore(&sem->wait.lock, flags);
tsk->state = TASK_RUNNING;
return retval;
}
EXPORT_SYMBOL(__down_interruptible);
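For context, __down(), __down_interruptible() and __up() above are only the contended slow paths behind the inline fast paths in <asm/semaphore.h>. A hypothetical kernel-side user of the API of that era might look as follows (the semaphore name and functions are made up for illustration):

/* Hypothetical driver fragment; contention ends up in the slow paths above. */
#include <linux/errno.h>
#include <asm/semaphore.h>

static struct semaphore example_sem;

static void example_setup(void)
{
	sema_init(&example_sem, 1);	/* count of 1: behaves as a mutex */
}

static int example_update_shared_state(void)
{
	if (down_interruptible(&example_sem))
		return -ERESTARTSYS;	/* interrupted by a signal */
	/*
	 * ... touch data protected by example_sem; contended acquirers
	 * sleep in __down_interruptible() above ...
	 */
	up(&example_sem);		/* wakes a waiter via __up() if needed */
	return 0;
}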
/*
* Copyright (C) 2004-2006 Atmel Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/clk.h>
#include <linux/init.h>
#include <linux/sched.h>
#include <linux/console.h>
#include <linux/ioport.h>
#include <linux/bootmem.h>
#include <linux/fs.h>
#include <linux/module.h>
#include <linux/root_dev.h>
#include <linux/cpu.h>
#include <asm/sections.h>
#include <asm/processor.h>
#include <asm/pgtable.h>
#include <asm/setup.h>
#include <asm/sysreg.h>
#include <asm/arch/board.h>
#include <asm/arch/init.h>
extern int root_mountflags;
/*
* Bootloader-provided information about physical memory
*/
struct tag_mem_range *mem_phys;
struct tag_mem_range *mem_reserved;
struct tag_mem_range *mem_ramdisk;
/*
* Initialize loops_per_jiffy as 5000000 (500MIPS).
* Better make it too large than too small...
*/
struct avr32_cpuinfo boot_cpu_data = {
.loops_per_jiffy = 5000000
};
EXPORT_SYMBOL(boot_cpu_data);
static char command_line[COMMAND_LINE_SIZE];
/*
* Should be more than enough, but if you have a _really_ complex
* setup, you might need to increase the size of this...
*/
static struct tag_mem_range __initdata mem_range_cache[32];
static unsigned mem_range_next_free;
/*
* Standard memory resources
*/
static struct resource mem_res[] = {
{
.name = "Kernel code",
.start = 0,
.end = 0,
.flags = IORESOURCE_MEM
},
{
.name = "Kernel data",
.start = 0,
.end = 0,
.flags = IORESOURCE_MEM,
},
};
#define kernel_code mem_res[0]
#define kernel_data mem_res[1]
/*
* Early framebuffer allocation. Works as follows:
* - If fbmem_size is zero, nothing will be allocated or reserved.
* - If fbmem_start is zero when setup_bootmem() is called,
* fbmem_size bytes will be allocated from the bootmem allocator.
* - If fbmem_start is nonzero, an area of size fbmem_size will be
* reserved at the physical address fbmem_start if necessary. If
* the area isn't in a memory region known to the kernel, it will
* be left alone.
*
* Board-specific code may use these variables to set up platform data
* for the framebuffer driver if fbmem_size is nonzero.
*/
static unsigned long __initdata fbmem_start;
static unsigned long __initdata fbmem_size;
/*
* "fbmem=xxx[kKmM]" allocates the specified amount of boot memory for
* use as framebuffer.
*
* "fbmem=xxx[kKmM]@yyy[kKmM]" defines a memory region of size xxx and
* starting at yyy to be reserved for use as framebuffer.
*
* The kernel won't verify that the memory region starting at yyy
* actually contains usable RAM.
*/
static int __init early_parse_fbmem(char *p)
{
fbmem_size = memparse(p, &p);
if (*p == '@')
fbmem_start = memparse(p + 1, &p); /* skip the '@' separator */
return 0;
}
early_param("fbmem", early_parse_fbmem);
static inline void __init resource_init(void)
{
struct tag_mem_range *region;
kernel_code.start = __pa(init_mm.start_code);
kernel_code.end = __pa(init_mm.end_code - 1);
kernel_data.start = __pa(init_mm.end_code);
kernel_data.end = __pa(init_mm.brk - 1);
for (region = mem_phys; region; region = region->next) {
struct resource *res;
unsigned long phys_start, phys_end;
if (region->size == 0)
continue;
phys_start = region->addr;
phys_end = phys_start + region->size - 1;
res = alloc_bootmem_low(sizeof(*res));
res->name = "System RAM";
res->start = phys_start;
res->end = phys_end;
res->flags = IORESOURCE_MEM | IORESOURCE_BUSY;
request_resource (&iomem_resource, res);
if (kernel_code.start >= res->start &&
kernel_code.end <= res->end)
request_resource (res, &kernel_code);
if (kernel_data.start >= res->start &&
kernel_data.end <= res->end)
request_resource (res, &kernel_data);
}
}
static int __init parse_tag_core(struct tag *tag)
{
if (tag->hdr.size > 2) {
if ((tag->u.core.flags & 1) == 0)
root_mountflags &= ~MS_RDONLY;
ROOT_DEV = new_decode_dev(tag->u.core.rootdev);
}
return 0;
}
__tagtable(ATAG_CORE, parse_tag_core);
static int __init parse_tag_mem_range(struct tag *tag,
struct tag_mem_range **root)
{
struct tag_mem_range *cur, **pprev;
struct tag_mem_range *new;
/*
* Ignore zero-sized entries. If we're running standalone, the
* SDRAM code may emit such entries if something goes
* wrong...
*/
if (tag->u.mem_range.size == 0)
return 0;
/*
* Copy the data so the bootmem init code doesn't need to care
* about it.
*/
if (mem_range_next_free >=
(sizeof(mem_range_cache) / sizeof(mem_range_cache[0])))
panic("Physical memory map too complex!\n");
new = &mem_range_cache[mem_range_next_free++];
*new = tag->u.mem_range;
pprev = root;
cur = *root;
while (cur) {
pprev = &cur->next;
cur = cur->next;
}
*pprev = new;
new->next = NULL;
return 0;
}
static int __init parse_tag_mem(struct tag *tag)
{
return parse_tag_mem_range(tag, &mem_phys);
}
__tagtable(ATAG_MEM, parse_tag_mem);
static int __init parse_tag_cmdline(struct tag *tag)
{
strlcpy(saved_command_line, tag->u.cmdline.cmdline, COMMAND_LINE_SIZE);
return 0;
}
__tagtable(ATAG_CMDLINE, parse_tag_cmdline);
static int __init parse_tag_rdimg(struct tag *tag)
{
return parse_tag_mem_range(tag, &mem_ramdisk);
}
__tagtable(ATAG_RDIMG, parse_tag_rdimg);
static int __init parse_tag_clock(struct tag *tag)
{
/*
* We'll figure out the clocks by peeking at the system
* manager regs directly.
*/
return 0;
}
__tagtable(ATAG_CLOCK, parse_tag_clock);
static int __init parse_tag_rsvd_mem(struct tag *tag)
{
return parse_tag_mem_range(tag, &mem_reserved);
}
__tagtable(ATAG_RSVD_MEM, parse_tag_rsvd_mem);
static int __init parse_tag_ethernet(struct tag *tag)
{
#if 0
const struct platform_device *pdev;
/*
* We really need a bus type that supports "classes"...this
* will do for now (until we must handle other kinds of
* ethernet controllers)
*/
pdev = platform_get_device("macb", tag->u.ethernet.mac_index);
if (pdev && pdev->dev.platform_data) {
struct eth_platform_data *data = pdev->dev.platform_data;
data->valid = 1;
data->mii_phy_addr = tag->u.ethernet.mii_phy_addr;
memcpy(data->hw_addr, tag->u.ethernet.hw_address,
sizeof(data->hw_addr));
}
#endif
return 0;
}
__tagtable(ATAG_ETHERNET, parse_tag_ethernet);
/*
* Scan the tag table for this tag, and call its parse function. The
* tag table is built by the linker from all the __tagtable
* declarations.
*/
static int __init parse_tag(struct tag *tag)
{
extern struct tagtable __tagtable_begin, __tagtable_end;
struct tagtable *t;
for (t = &__tagtable_begin; t < &__tagtable_end; t++)
if (tag->hdr.tag == t->tag) {
t->parse(tag);
break;
}
return t < &__tagtable_end;
}
/*
* Parse all tags in the list we got from the boot loader
*/
static void __init parse_tags(struct tag *t)
{
for (; t->hdr.tag != ATAG_NONE; t = tag_next(t))
if (!parse_tag(t))
printk(KERN_WARNING
"Ignoring unrecognised tag 0x%08x\n",
t->hdr.tag);
}
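The walker above dispatches on whatever __tagtable entries the image was linked with, so supporting a new boot tag is just another declaration in this file. A hedged sketch (ATAG_BOARDREV and its payload are invented for illustration, not real AVR32 tags):

/* Hypothetical parser for an invented board-revision tag. */
static u32 example_board_rev __initdata;

static int __init parse_tag_boardrev(struct tag *tag)
{
	/* assumes the (made-up) tag carries a single u32 payload */
	example_board_rev = *(u32 *)&tag->u;
	return 0;
}
__tagtable(ATAG_BOARDREV, parse_tag_boardrev);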
void __init setup_arch (char **cmdline_p)
{
struct clk *cpu_clk;
parse_tags(bootloader_tags);
setup_processor();
setup_platform();
cpu_clk = clk_get(NULL, "cpu");
if (IS_ERR(cpu_clk)) {
printk(KERN_WARNING "Warning: Unable to get CPU clock\n");
} else {
unsigned long cpu_hz = clk_get_rate(cpu_clk);
/*
* Well, duh, but it's probably a good idea to
* increment the use count.
*/
clk_enable(cpu_clk);
boot_cpu_data.clk = cpu_clk;
boot_cpu_data.loops_per_jiffy = cpu_hz * 4;
printk("CPU: Running at %lu.%03lu MHz\n",
((cpu_hz + 500) / 1000) / 1000,
((cpu_hz + 500) / 1000) % 1000);
}
init_mm.start_code = (unsigned long) &_text;
init_mm.end_code = (unsigned long) &_etext;
init_mm.end_data = (unsigned long) &_edata;
init_mm.brk = (unsigned long) &_end;
strlcpy(command_line, saved_command_line, COMMAND_LINE_SIZE);
*cmdline_p = command_line;
parse_early_param();
setup_bootmem();
board_setup_fbmem(fbmem_start, fbmem_size);
#ifdef CONFIG_VT
conswitchp = &dummy_con;
#endif
paging_init();
resource_init();
}
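setup_arch() above leans on the generic clk framework (clk_get/clk_enable/clk_get_rate) for the "cpu" clock. A driver-side sketch of the same calls, with a made-up clock name and device, for comparison:

/* Hedged sketch of a clk consumer; "pclk" and the device are illustrative. */
#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/kernel.h>

static int example_enable_clock(struct device *dev)
{
	struct clk *clk = clk_get(dev, "pclk");

	if (IS_ERR(clk))
		return PTR_ERR(clk);
	clk_enable(clk);	/* bump the use count, like setup_arch() does */
	pr_debug("clock runs at %lu Hz\n", clk_get_rate(clk));
	/* keep the reference for the device's lifetime; clk_put() on teardown */
	return 0;
}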
/*
* Copyright (C) 2004-2006 Atmel Corporation
*
* Based on linux/arch/sh/kernel/signal.c
* Copyright (C) 1999, 2000 Niibe Yutaka & Kaz Kojima
* Copyright (C) 1991, 1992 Linus Torvalds
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/sched.h>
#include <linux/mm.h>
#include <linux/errno.h>
#include <linux/ptrace.h>
#include <linux/unistd.h>
#include <linux/suspend.h>
#include <asm/uaccess.h>
#include <asm/ucontext.h>
#define _BLOCKABLE (~(sigmask(SIGKILL) | sigmask(SIGSTOP)))
asmlinkage int sys_sigaltstack(const stack_t __user *uss, stack_t __user *uoss,
struct pt_regs *regs)
{
return do_sigaltstack(uss, uoss, regs->sp);
}
struct rt_sigframe
{
struct siginfo info;
struct ucontext uc;
unsigned long retcode;
};
static int
restore_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc)
{
int err = 0;
#define COPY(x) err |= __get_user(regs->x, &sc->x)
COPY(sr);
COPY(pc);
COPY(lr);
COPY(sp);
COPY(r12);
COPY(r11);
COPY(r10);
COPY(r9);
COPY(r8);
COPY(r7);
COPY(r6);
COPY(r5);
COPY(r4);
COPY(r3);
COPY(r2);
COPY(r1);
COPY(r0);
#undef COPY
/*
* Don't allow anyone to pretend they're running in supervisor
* mode or something...
*/
err |= !valid_user_regs(regs);
return err;
}
asmlinkage int sys_rt_sigreturn(struct pt_regs *regs)
{
struct rt_sigframe __user *frame;
sigset_t set;
frame = (struct rt_sigframe __user *)regs->sp;
pr_debug("SIG return: frame = %p\n", frame);
if (!access_ok(VERIFY_READ, frame, sizeof(*frame)))
goto badframe;
if (__copy_from_user(&set, &frame->uc.uc_sigmask, sizeof(set)))
goto badframe;
sigdelsetmask(&set, ~_BLOCKABLE);
spin_lock_irq(&current->sighand->siglock);
current->blocked = set;
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
if (restore_sigcontext(regs, &frame->uc.uc_mcontext))
goto badframe;
pr_debug("Context restored: pc = %08lx, lr = %08lx, sp = %08lx\n",
regs->pc, regs->lr, regs->sp);
return regs->r12;
badframe:
force_sig(SIGSEGV, current);
return 0;
}
static int
setup_sigcontext(struct sigcontext __user *sc, struct pt_regs *regs)
{
int err = 0;
#define COPY(x) err |= __put_user(regs->x, &sc->x)
COPY(sr);
COPY(pc);
COPY(lr);
COPY(sp);
COPY(r12);
COPY(r11);
COPY(r10);
COPY(r9);
COPY(r8);
COPY(r7);
COPY(r6);
COPY(r5);
COPY(r4);
COPY(r3);
COPY(r2);
COPY(r1);
COPY(r0);
#undef COPY
return err;
}
static inline void __user *
get_sigframe(struct k_sigaction *ka, struct pt_regs *regs, int framesize)
{
unsigned long sp = regs->sp;
if ((ka->sa.sa_flags & SA_ONSTACK) && !sas_ss_flags(sp))
sp = current->sas_ss_sp + current->sas_ss_size;
return (void __user *)((sp - framesize) & ~3);
}
static int
setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
sigset_t *set, struct pt_regs *regs)
{
struct rt_sigframe __user *frame;
int err = 0;
frame = get_sigframe(ka, regs, sizeof(*frame));
err = -EFAULT;
if (!access_ok(VERIFY_WRITE, frame, sizeof (*frame)))
goto out;
/*
* Set up the return code:
*
* mov r8, __NR_rt_sigreturn
* scall
*
* Note: This will blow up since we're using a non-executable
* stack. Better use SA_RESTORER.
*/
#if __NR_rt_sigreturn > 127
# error __NR_rt_sigreturn must be <= 127 to fit in a short mov
#endif
err = __put_user(0x3008d733 | (__NR_rt_sigreturn << 20),
&frame->retcode);
err |= copy_siginfo_to_user(&frame->info, info);
/* Set up the ucontext */
err |= __put_user(0, &frame->uc.uc_flags);
err |= __put_user(NULL, &frame->uc.uc_link);
err |= __put_user((void __user *)current->sas_ss_sp,
&frame->uc.uc_stack.ss_sp);
err |= __put_user(sas_ss_flags(regs->sp),
&frame->uc.uc_stack.ss_flags);
err |= __put_user(current->sas_ss_size,
&frame->uc.uc_stack.ss_size);
err |= setup_sigcontext(&frame->uc.uc_mcontext, regs);
err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set));
if (err)
goto out;
regs->r12 = sig;
regs->r11 = (unsigned long) &frame->info;
regs->r10 = (unsigned long) &frame->uc;
regs->sp = (unsigned long) frame;
if (ka->sa.sa_flags & SA_RESTORER)
regs->lr = (unsigned long)ka->sa.sa_restorer;
else {
printk(KERN_NOTICE "[%s:%d] did not set SA_RESTORER\n",
current->comm, current->pid);
regs->lr = (unsigned long) &frame->retcode;
}
pr_debug("SIG deliver [%s:%d]: sig=%d sp=0x%lx pc=0x%lx->0x%p lr=0x%lx\n",
current->comm, current->pid, sig, regs->sp,
regs->pc, ka->sa.sa_handler, regs->lr);
regs->pc = (unsigned long) ka->sa.sa_handler;
out:
return err;
}
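The frame builder above warns that the on-stack retcode trampoline will not work with a non-executable stack and that SA_RESTORER is preferred; when handlers are installed through libc this is taken care of automatically, since libc supplies its own restorer. A minimal userspace sketch of installing an rt (siginfo) handler, for reference (not part of this patch):

/* Userspace side of rt signal delivery; libc fills in sa_restorer itself. */
#include <signal.h>
#include <string.h>
#include <unistd.h>

static void on_sigusr1(int sig, siginfo_t *info, void *ucontext)
{
	/* the register state saved by setup_rt_frame() is reachable via ucontext */
	(void)sig; (void)info; (void)ucontext;
	write(STDOUT_FILENO, "got SIGUSR1\n", 12);
}

int main(void)
{
	struct sigaction sa;

	memset(&sa, 0, sizeof(sa));
	sa.sa_sigaction = on_sigusr1;
	sa.sa_flags = SA_SIGINFO;	/* ask for the siginfo/ucontext frame */
	sigemptyset(&sa.sa_mask);
	sigaction(SIGUSR1, &sa, NULL);
	raise(SIGUSR1);
	return 0;
}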
static inline void restart_syscall(struct pt_regs *regs)
{
if (regs->r12 == -ERESTART_RESTARTBLOCK)
regs->r8 = __NR_restart_syscall;
else
regs->r12 = regs->r12_orig;
regs->pc -= 2;
}
static inline void
handle_signal(unsigned long sig, struct k_sigaction *ka, siginfo_t *info,
sigset_t *oldset, struct pt_regs *regs, int syscall)
{
int ret;
/*
* Set up the stack frame
*/
ret = setup_rt_frame(sig, ka, info, oldset, regs);
/*
* Check that the resulting registers are sane
*/
ret |= !valid_user_regs(regs);
/*
* Block the signal if we were unsuccessful.
*/
if (ret != 0 || !(ka->sa.sa_flags & SA_NODEFER)) {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked, &current->blocked,
&ka->sa.sa_mask);
sigaddset(&current->blocked, sig);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
if (ret == 0)
return;
force_sigsegv(sig, current);
}
/*
* Note that 'init' is a special process: it doesn't get signals it
* doesn't want to handle. Thus you cannot kill init even with a
* SIGKILL even by mistake.
*/
int do_signal(struct pt_regs *regs, sigset_t *oldset, int syscall)
{
siginfo_t info;
int signr;
struct k_sigaction ka;
/*
* We want the common case to go fast, which is why we may in
* certain cases get here from kernel mode. Just return
* without doing anything if so.
*/
if (!user_mode(regs))
return 0;
if (try_to_freeze()) {
signr = 0;
if (!signal_pending(current))
goto no_signal;
}
if (test_thread_flag(TIF_RESTORE_SIGMASK))
oldset = &current->saved_sigmask;
else if (!oldset)
oldset = &current->blocked;
signr = get_signal_to_deliver(&info, &ka, regs, NULL);
no_signal:
if (syscall) {
switch (regs->r12) {
case -ERESTART_RESTARTBLOCK:
case -ERESTARTNOHAND:
if (signr > 0) {
regs->r12 = -EINTR;
break;
}
/* fall through */
case -ERESTARTSYS:
if (signr > 0 && !(ka.sa.sa_flags & SA_RESTART)) {
regs->r12 = -EINTR;
break;
}
/* fall through */
case -ERESTARTNOINTR:
restart_syscall(regs);
}
}
if (signr == 0) {
/* No signal to deliver -- put the saved sigmask back */
if (test_thread_flag(TIF_RESTORE_SIGMASK)) {
clear_thread_flag(TIF_RESTORE_SIGMASK);
sigprocmask(SIG_SETMASK, &current->saved_sigmask, NULL);
}
return 0;
}
handle_signal(signr, &ka, &info, oldset, regs, syscall);
return 1;
}
asmlinkage void do_notify_resume(struct pt_regs *regs, struct thread_info *ti)
{
int syscall = 0;
if ((sysreg_read(SR) & MODE_MASK) == MODE_SUPERVISOR)
syscall = 1;
if (ti->flags & (_TIF_SIGPENDING | _TIF_RESTORE_SIGMASK))
do_signal(regs, &current->blocked, syscall);
}
/*
* Copyright (C) 2004-2006 Atmel Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <asm/sysreg.h>
.text
.global __switch_to
.type __switch_to, @function
/* Switch thread context from "prev" to "next", returning "last"
* r12 : prev
* r11 : &prev->thread + 1
* r10 : &next->thread
*/
__switch_to:
stm --r11, r0,r1,r2,r3,r4,r5,r6,r7,sp,lr
mfsr r9, SYSREG_SR
st.w --r11, r9
ld.w r8, r10++
/*
* schedule() may have been called from a mode with a different
* set of registers. Make sure we don't lose anything here.
*/
pushm r10,r12
mtsr SYSREG_SR, r8
frs /* flush the return stack */
sub pc, -2 /* flush the pipeline */
popm r10,r12
ldm r10++, r0,r1,r2,r3,r4,r5,r6,r7,sp,pc
.size __switch_to, . - __switch_to
/*
* Copyright (C) 2004-2006 Atmel Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/file.h>
#include <linux/mm.h>
#include <linux/unistd.h>
#include <asm/mman.h>
#include <asm/uaccess.h>
asmlinkage int sys_pipe(unsigned long __user *filedes)
{
int fd[2];
int error;
error = do_pipe(fd);
if (!error) {
if (copy_to_user(filedes, fd, sizeof(fd)))
error = -EFAULT;
}
return error;
}
asmlinkage long sys_mmap2(unsigned long addr, unsigned long len,
unsigned long prot, unsigned long flags,
unsigned long fd, off_t offset)
{
int error = -EBADF;
struct file *file = NULL;
flags &= ~(MAP_EXECUTABLE | MAP_DENYWRITE);
if (!(flags & MAP_ANONYMOUS)) {
file = fget(fd);
if (!file)
return error;
}
down_write(&current->mm->mmap_sem);
error = do_mmap_pgoff(file, addr, len, prot, flags, offset);
up_write(&current->mm->mmap_sem);
if (file)
fput(file);
return error;
}
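sys_pipe() and sys_mmap2() above are just the AVR32 bindings for two standard interfaces; the corresponding libc-level usage, for reference (a sketch, not part of the patch; on 32-bit targets the libc mmap() wrapper goes through mmap2):

/* Ordinary userspace use of the two syscalls wrapped above. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	int fd[2];
	char buf[8];
	void *p;

	if (pipe(fd) == 0) {		/* serviced by sys_pipe() */
		write(fd[1], "hello", 6);
		read(fd[0], buf, sizeof(buf));
		printf("pipe said: %s\n", buf);
		close(fd[0]);
		close(fd[1]);
	}

	/* anonymous mapping; ends up in sys_mmap2()/do_mmap_pgoff() */
	p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p != MAP_FAILED) {
		strcpy(p, "mapped");
		munmap(p, 4096);
	}
	return 0;
}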
#
# Makefile for AVR32-specific library files
#
lib-y := copy_user.o clear_user.o
lib-y += strncpy_from_user.o strnlen_user.o
lib-y += delay.o memset.o memcpy.o findbit.o
lib-y += csum_partial.o csum_partial_copy_generic.o
lib-y += io-readsw.o io-readsl.o io-writesw.o io-writesl.o
lib-y += __avr32_lsl64.o __avr32_lsr64.o __avr32_asr64.o
/*
* Copyright (C) 2004-2006 Atmel Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
.global __raw_readsl
.type __raw_readsl,@function
__raw_readsl:
cp.w r10, 0
reteq r12
/*
* If r11 isn't properly aligned, we might get an exception on
* some implementations. But there's not much we can do about it.
*/
1: ld.w r8, r12[0]
sub r10, 1
st.w r11++, r8
brne 1b
retal r12
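__raw_readsl above repeatedly loads from a single MMIO address and stores into a buffer, i.e. it backs the string-read accessors used to drain device FIFOs. A hedged driver fragment using it; the register offset, word count, function name and the C prototype assumed for __raw_readsl are all illustrative, not taken from this patch:

/* Hypothetical FIFO drain built on the helper implemented above.
 * Assumes <asm/io.h> declares roughly:
 *     void __raw_readsl(const void __iomem *addr, void *data, int longlen);
 */
#include <linux/types.h>
#include <asm/io.h>

#define EXAMPLE_FIFO_OFFSET	0x20	/* made-up register offset */

static void example_drain_fifo(void __iomem *base, u32 *buf, int words)
{
	/* read 'words' 32-bit values from one MMIO address into buf */
	__raw_readsl(base + EXAMPLE_FIFO_OFFSET, buf, words);
}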
obj-y += at32ap.o clock.o pio.o intc.o extint.o hsmc.o
obj-$(CONFIG_CPU_AT32AP7000) += at32ap7000.o