- 30 Dec, 2008 1 commit
-
-
Anton Vorontsov authored
This is needed to not bother with ugly #ifdefs in the drivers. Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com> Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 28 Dec, 2008 2 commits
-
-
Ilya Yanok authored
This adds support for 16k and 64k page sizes on PowerPC 44x processors. The PGDIR table is much smaller than a page when using 16k or 64k pages (512 and 32 bytes respectively), so we allocate the PGDIR with kzalloc() instead of __get_free_pages(). One PTE table covers a rather large memory area when using 16k or 64k pages (32MB or 512MB respectively), so we can easily put FIXMAP and PKMAP in the area covered by one PTE table. Signed-off-by: Yuri Tikhonov <yur@emcraft.com> Signed-off-by: Vladimir Panfilov <pvr@emcraft.com> Signed-off-by: Ilya Yanok <yanok@emcraft.com> Acked-by: Josh Boyer <jwboyer@linux.vnet.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
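A minimal sketch of the allocation choice described above, assuming a PGD_TABLE_SIZE macro for the directory size (the real 44x code spells the size with different macros):

#include <linux/mm.h>
#include <linux/slab.h>
#include <asm/pgalloc.h>

/* When 16k or 64k pages make the PGDIR much smaller than one page,
 * kzalloc() is sufficient; otherwise fall back to page allocation. */
pgd_t *pgd_alloc(struct mm_struct *mm)
{
	if (PGD_TABLE_SIZE < PAGE_SIZE)
		return kzalloc(PGD_TABLE_SIZE, GFP_KERNEL);

	return (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
					 get_order(PGD_TABLE_SIZE));
}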
-
Hollis Blanchard authored
Ensure that total memory size is page-aligned, because otherwise mark_bootmem() gets upset. This error case was triggered by using 64 KiB pages in the kernel while arch/powerpc/boot/4xx.c arbitrarily reduced the amount of memory by 4096 (to work around a chip bug that affects the last 256 bytes of physical memory). Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
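A small illustration of the fix's shape, assuming the bootmem setup path holds the reported size in a plain unsigned long (the helper name is illustrative):

#include <asm/page.h>

/* Clamp the device-tree-reported memory size to a whole number of
 * pages so mark_bootmem() never sees a partial page. */
static inline unsigned long clamp_memory_to_pages(unsigned long size)
{
	return _ALIGN_DOWN(size, PAGE_SIZE);
}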
-
- 23 Dec, 2008 11 commits
-
-
Dale Farnsworth authored
Wire up the trampoline code for ppc32 to relay exceptions from the vectors at address 0 to vectors at address 32MB, and modify Kconfig to enable Kdump support for all classic powerpcs. Signed-off-by: Dale Farnsworth <dale@farnsworth.org> Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Dale Farnsworth authored
Add the ability for a classic ppc kernel to be loaded at an address of 32MB. This is done by fixing a few places that assume we are loaded at address 0, and by changing several uses of KERNELBASE to use PAGE_OFFSET instead. Signed-off-by: Dale Farnsworth <dale@farnsworth.org> Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Anton Vorontsov authored
While it is good for debugging to catch bogus users of ioremap, for kdump support it is more convenient to use __ioremap for copy_oldmem_page() (exactly as we do for PPC64 currently). Note that copy_oldmem_page() calls __ioremap with flags set to '0', so it should be safe with regard to the caches. The other option is to use kmap_atomic_pfn()[1], but it will not work for kernels compiled without HIGHMEM. That is, on a board with 256MB RAM and the crashkernel=64M@32M case, the !HIGHMEM capturing kernel maps only the 0-96M range, which does not include all the memory needed to capture the dump, and, obviously, accessing anything above 96M will cause faults. [1] http://ozlabs.org/pipermail/linuxppc-dev/2007-November/046747.html Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
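A sketch of the ppc32 copy_oldmem_page() flow described above (closely modelled on the PPC64 variant; treat it as illustrative rather than the committed code):

#include <linux/crash_dump.h>
#include <linux/io.h>
#include <linux/string.h>
#include <linux/uaccess.h>

ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
			 unsigned long offset, int userbuf)
{
	void *vaddr;

	if (!csize)
		return 0;

	/* flags == 0: no cache-inhibit or guard bits forced on the mapping */
	vaddr = __ioremap(pfn << PAGE_SHIFT, PAGE_SIZE, 0);

	if (userbuf) {
		if (copy_to_user((char __user *)buf, vaddr + offset, csize)) {
			iounmap(vaddr);
			return -EFAULT;
		}
	} else {
		memcpy(buf, vaddr + offset, csize);
	}

	iounmap(vaddr);
	return csize;
}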
-
Dale Farnsworth authored
Refactor the setting of kdump OF properties, moving the common code from machine_kexec_64.c to machine_kexec.c where it can be used on both ppc64 and ppc32. This will be needed for kdump to work on ppc32 platforms. Signed-off-by: Dale Farnsworth <dale@farnsworth.org> Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Anton Vorontsov authored
This replaces the dummy crash_setup_regs function with a full-fledged implementation. On PPC32 we simply use the new ppc_save_regs function to dump the registers. Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
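A sketch of the shape this takes (assuming the helper lives in the ppc32 kexec header; ppc_save_regs comes from the xmon move described a few commits below):

#include <linux/string.h>
#include <asm/ptrace.h>

extern void ppc_save_regs(struct pt_regs *regs);

static inline void crash_setup_regs(struct pt_regs *newregs,
				    struct pt_regs *oldregs)
{
	if (oldregs)
		memcpy(newregs, oldregs, sizeof(*newregs));
	else
		ppc_save_regs(newregs);	/* capture the current registers */
}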
-
Anton Vorontsov authored
Today the arch/powerpc/xmon/setjmp.S file contains only the xmon_save_regs function. We want to use it for kdump purposes, so let's move the file into arch/powerpc/kernel/ and give the function a more generic name (ppc_save_regs). Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Anton Vorontsov authored
Default ops are implicit now. Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Anton Vorontsov authored
This removes the need for each platform to specify default kexec and crash kernel ops, thus effectively adding working kexec support for most 6xx/7xx/7xxx-based boards. Platforms that can't cope with the default ops will explode in some weird way (a hang or reboot is most likely), which means that the board's kexec support should be fixed or blacklisted via a dummy _prepare callback returning -ENOSYS. Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
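For boards that need blacklisting, a hedged sketch of the opt-out (the board name is a placeholder):

#include <linux/errno.h>
#include <linux/kexec.h>

/* Refuse kexec on this board until its support is fixed. */
static int myboard_machine_kexec_prepare(struct kimage *image)
{
	return -ENOSYS;
}

The callback would then be assigned to the machine_kexec_prepare member of the board's ppc_md/define_machine() block.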
-
Dale Farnsworth authored
Refactor the setting of kexec OF properties, moving the common code from machine_kexec_64.c to machine_kexec.c where it can be used on both ppc64 and ppc32. This is needed for kexec to work on ppc32 platforms. Signed-off-by: Dale Farnsworth <dale@farnsworth.org> Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Sebastien Dugue authored
Currently, pseries_cpu_die() calls msleep() while polling RTAS for the status of the dying cpu. However, if the cpu that is going down also happens to be the one doing the tick then we're hosed as the tick_do_timer_cpu 'baton' is only passed later on in tick_shutdown() when _cpu_down() does the CPU_DEAD notification. Therefore jiffies won't be updated anymore. This replaces that msleep() with a cpu_relax() to make sure we're not going to schedule at that point. With this patch my test box survives a 100k iterations hotplug stress test on _all_ cpus, whereas without it, it quickly dies after ~50 iterations. Signed-off-by: Sebastien Dugue <sebastien.dugue@bull.net> Cc: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
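A sketch of the polling loop after the change (the RTAS query helper name is illustrative): spinning with cpu_relax() never tries to schedule, so it cannot hang when the dying CPU is still the one responsible for the timer tick.

#include <asm/processor.h>	/* cpu_relax() */

extern int query_cpu_stopped(unsigned int pcpu);	/* RTAS status query (illustrative) */

static void pseries_wait_for_cpu_stop(unsigned int pcpu)
{
	int cpu_status, tries;

	for (tries = 0; tries < 25; tries++) {
		cpu_status = query_cpu_stopped(pcpu);
		if (cpu_status == 0 || cpu_status == -1)
			break;		/* stopped, or hardware error */
		cpu_relax();		/* was msleep(1), which may never wake up here */
	}
}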
-
Paul Mackerras authored
Commit 2a4aca11 ("powerpc/mm: Split low level tlb invalidate for nohash processors") changed a call to _tlbia to _tlbil_all but didn't include the header that defines _tlbil_all, leading to a build failure on 440 if KVM is enabled. This fixes it. Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 22 Dec, 2008 2 commits
-
-
Benjamin Krill authored
Since the QPACE (Chromodynamics Parallel Computing on the Cell Broadband Engine) platform doesn't use an IOMMU and has no PCI devices and no MPIC, much less setup and configuration is needed. So far all devices are detected as OF devices. A notifier function is used to set the dma_ops for the of_platform bus. Further, this patch splits PPC_CELL_NATIVE into PPC_CELL_COMMON, which covers the parts shared with the QPACE platform, and the rest. Signed-off-by: Benjamin Krill <ben@codiert.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
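A sketch of the notifier approach (helper and ops names follow the powerpc code of this period but are used illustratively here): newly added of_platform devices are pointed at the direct, no-IOMMU DMA ops.

#include <linux/device.h>
#include <linux/init.h>
#include <linux/notifier.h>
#include <linux/of_platform.h>
#include <asm/dma-mapping.h>

static int qpace_device_notify(struct notifier_block *nb,
			       unsigned long action, void *data)
{
	struct device *dev = data;

	if (action == BUS_NOTIFY_ADD_DEVICE)
		set_dma_ops(dev, &dma_direct_ops);

	return NOTIFY_DONE;
}

static struct notifier_block qpace_of_bus_notifier = {
	.notifier_call = qpace_device_notify,
};

static int __init qpace_register_dma_notifier(void)
{
	return bus_register_notifier(&of_platform_bus_type,
				     &qpace_of_bus_notifier);
}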
-
Arnd Bergmann authored
CBE_THERM and OPROFILE_CELL both cannot be built with SPU_FS disabled, so make the dependency explicit. Reported-by: Milton Miller <miltonm@bga.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
- 21 Dec, 2008 24 commits
-
-
Wolfram Sang authored
Error cases for mapbase and irq were unbundled, and the mapped irq now gets disposed of on error. Signed-off-by: Wolfram Sang <w.sang@pengutronix.de> Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
-
Wolfram Sang authored
Add RTS/CTS-support for the PSC of the MPC5200B. Tested with a Phytec MPC5200B-IO. Signed-off-by: Wolfram Sang <w.sang@pengutronix.de> Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
-
René Bürgel authored
This patch adds the capability to the mpc52xx-uart to report framing errors, parity errors, breaks and overruns to userspace. These values may be requested in userspace by using the ioctl TIOCGICOUNT. Signed-off-by: René Bürgel <r.buergel@unicontrol.de> Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
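A sketch of the accounting this requires (the status bit names follow the mpc52xx PSC register header conventions but should be treated as illustrative): the RX path bumps the per-port icount fields, which the serial core later reports to userspace for TIOCGICOUNT.

#include <linux/serial_core.h>

static void mpc52xx_uart_count_errors(struct uart_port *port,
				      unsigned int status)
{
	if (status & MPC52xx_PSC_SR_RB)
		port->icount.brk++;		/* break condition */
	else if (status & MPC52xx_PSC_SR_PE)
		port->icount.parity++;		/* parity error */

	if (status & MPC52xx_PSC_SR_FE)
		port->icount.frame++;		/* framing error */

	if (status & MPC52xx_PSC_SR_OE)
		port->icount.overrun++;		/* receiver overrun */
}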
-
Wolfram Sang authored
As this driver polls for a complete MDIO transaction, there is no need to enable interrupts for it. Furthermore, make both checks for freeing MDIO-bus irqs consistent. Signed-off-by: Wolfram Sang <w.sang@pengutronix.de> Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
-
Tim Yamin authored
This patch adds MDMA/UDMA support using BestComm for DMA on the MPC5200 platform. Based heavily on previous work by Freescale (Bernard Kuhn, John Rigby) and Domen Puncer. With this patch, a SanDisk Extreme IV CF card gets read speeds of approximately 26.70 MB/sec. Signed-off-by: Tim Yamin <plasm@roo.me.uk> Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
-
Grant Likely authored
When ATA DMA is enabled, bestcomm prefetching does not work. This patch adds a function to disable bestcomm prefetch when the ATA Bestcomm task is initialized. Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
-
Tim Yamin authored
1) ata.h has dst_pa in the wrong place (needs to match what the BestComm task microcode in bcom_ata_task.c expects); fix it. 2) The BestComm ATA task priority was changed to maximum in bestcomm_priv.h; this fixes a deadlock issue experienced with heavy DMA occurring on both the ATA and Ethernet BestComm tasks, e.g. when downloading a large file over a LAN to disk. Signed-off-by: Tim Yamin <plasm@roo.me.uk> Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
-
Grant Likely authored
The buffer descriptors for the ATA BestComm task are larger than the current definition for bcom_bd. This causes problems because the various bcom_... functions dereference the buffer descriptor pointer by using the array operator which doesn't work when the buffer descriptors are a different size. This patch adds the bcom_get_bd() function which uses the value in bcom_task.bd_size to calculate the offset into the BD table. This patch also changes the definition of bcom_bd to specify a data size of 0 instead of 1 so that it will never work if anyone attempts to dereference the bd list as an array (as opposed to something that might work even though it is wrong). Finally, this patch moves the definition of bcom_bd up in the file to eliminate a forward declaration. Based on patch originally written by Tim Yamin. Signed-off-by: Tim Yamin <plasm@roo.me.uk> Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
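A minimal sketch of the accessor described above, assuming the struct bcom_task/struct bcom_bd definitions from the BestComm header (field names as given in the message):

/* Index into the BD table using the task's actual descriptor size
 * rather than sizeof(struct bcom_bd), so oversized (e.g. ATA)
 * descriptors are handled correctly. */
static inline struct bcom_bd *
bcom_get_bd(struct bcom_task *tsk, unsigned int index)
{
	return (struct bcom_bd *)((void *)tsk->bd + tsk->bd_size * index);
}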
-
Grant Likely authored
The MPC5200 internal interrupt controller setup function needs to set the default interrupt controller when it is called. Without this, irq_create_of_mapping() cannot be called without first determining the pointer to the irq controller (i.e. it cannot be called with controller = NULL). Reported-by: Steven Cavanagh <scavanagh@secretlab.ca> Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
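A fragment showing the shape of the fix (symbol names approximate the mpc52xx PIC code of this era and are not copied from the commit): once the linear irq_host exists, register it as the default.

	/* inside mpc52xx_init_irq(), after the PIC registers are mapped: */
	mpc52xx_irqhost = irq_alloc_host(picnode, IRQ_HOST_MAP_LINEAR,
					 MPC52xx_IRQ_HIGHTESTVIRQ,
					 &mpc52xx_irqhost_ops, -1);
	if (!mpc52xx_irqhost)
		panic("%s: Cannot allocate the MPC52xx PIC irq_host!", __func__);

	/* Let irq_create_of_mapping()/irq_create_mapping() callers pass a
	 * NULL controller and still resolve against this PIC. */
	irq_set_default_host(mpc52xx_irqhost);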
-
Grant Likely authored
This patch adds documentation to the mpc5200 interrupt controller driver and cleans up some minor coding conventions. It also moves the contents of mpc52xx_pic.h into the driver proper (except for a small common bit that is moved to the common mpc52xx.h) because the information encoded there is not required by any other part of kernel code. Finally for code readability sake, the L2_OFFSET shift value is removed because the code using it resolves to a noop. Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
-
Benjamin Herrenschmidt authored
The rework of the MMU code dropped a much-missed 'blr' instruction. Brown-Paper-Bag-Worn-By: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
-
Scott Wood authored
The correct #address-cells was still used for the actual translation, so the impact is only a possibility of choosing the wrong range entry or failing to find any match. Most common cases were not affected. Signed-off-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Grant Erickson authored
Add const qualifier to device_node argument for dcr_resource_{start,len} as of_get_property also const-qualifies this argument. Signed-off-by: Grant Erickson <gerickson@nuovations.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
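The resulting prototypes, paraphrased from the description above:

#include <linux/of.h>

extern unsigned int dcr_resource_start(const struct device_node *np,
				       unsigned int index);
extern unsigned int dcr_resource_len(const struct device_node *np,
				     unsigned int index);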
-
Benjamin Herrenschmidt authored
After discussing with chip designers, it appears that it's not necessary to set G everywhere on 440 cores. The various core errata related to prefetch should be sorted out by firmware by disabling icache prefetching in CCR0. We add the workaround to the kernel however just in case oooold firmwares don't do it. This is valid for -all- 4xx core variants. Later ones hard wire the absence of prefetch but it doesn't harm to clear the bits in CCR0 (they should already be cleared anyway). We still leave G=1 on the linear mapping for now, we need to stop over-mapping RAM to be able to remove it. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Acked-by: Kumar Gala <galak@kernel.crashing.org> Acked-by: Josh Boyer <jwboyer@linux.vnet.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Benjamin Herrenschmidt authored
Currently, we never set _PAGE_COHERENT in the PTEs; we just OR it in in the hash code based on some CPU feature bit. We also manipulate _PAGE_NO_CACHE and _PAGE_GUARDED by hand in all sorts of places. This changes the logic so that instead, the PTE now contains _PAGE_COHERENT for all normal RAM pages that have I = 0 on platforms that need it. The hash code clears it if the feature bit is not set. It also adds some clean accessors to set up various valid combinations of access flags and changes various bits of code to use them instead. This should help having the PTE actually contain the bit combinations that we really want. I also removed _PAGE_GUARDED from _PAGE_BASE on 44x and instead set it explicitly from the TLB miss. I will ultimately remove it completely as it appears that it might not be needed after all, but in the meantime, having it in the TLB miss makes things a lot easier. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Acked-by: Kumar Gala <galak@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Benjamin Herrenschmidt authored
This makes the MMU context code used for CPUs with no hash table (except 603) dynamically allocate the various maps used to track the state of contexts. Only the main free map and CPU 0 stale map are allocated at boot time. Other CPU maps are allocated when those CPUs are brought up and freed if they are unplugged. This also moves the initialization of the MMU context management slightly later during the boot process, which should be fine as it's really only needed when userland is first started anyway. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Acked-by: Kumar Gala <galak@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Benjamin Herrenschmidt authored
The handlers for Critical, Machine Check or Debug interrupts will save and restore MMUCR nowadays, thus we only need to disable normal interrupts when invalidating TLB entries. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Acked-by: Kumar Gala <galak@kernel.crashing.org> Acked-by: Josh Boyer <jwboyer@linux.vnet.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Benjamin Herrenschmidt authored
Currently, the various forms of low level TLB invalidations are all implemented in misc_32.S for 32-bit processors, in a fairly scary mess of #ifdef's and with interesting duplication such as a whole bunch of code for FSL _tlbie and _tlbia which are no longer used. This moves things around such that _tlbie is now defined in hash_low_32.S and is only used by the 32-bit hash code, and all nohash CPUs use the various _tlbil_* forms that are now moved to a new file, tlb_nohash_low.S. I moved all the definitions for that stuff out of include/asm/tlbflush.h and into mm/mmu_decl.h, as they are really internal mm stuff. The code should have no functional changes. I kept some variants inline for trivial forms on things like 40x and 8xx. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Acked-by: Kumar Gala <galak@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Benjamin Herrenschmidt authored
This commit moves the whole no-hash TLB handling out of line into a new tlb_nohash.c file, and implements some basic SMP support using IPIs and/or broadcast tlbivax instructions. Note that I'm using local invalidations for D->I cache coherency. At worst, if another processor is trying to execute the same and has the old entry in its TLB, it will just take a fault and re-do the TLB flush locally (it won't re-do the cache flush in any case). Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Acked-by: Kumar Gala <galak@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
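A simplified sketch of the IPI path described above (names are illustrative; the real code only targets CPUs that may have the mm active and prefers broadcast tlbivax where available):

#include <linux/mm.h>
#include <linux/preempt.h>
#include <linux/smp.h>

/* Low level per-PID invalidation, as named in the commit message. */
extern void _tlbil_va(unsigned long address, unsigned int pid);

struct tlb_flush_param {
	unsigned long addr;
	unsigned int pid;
};

static void do_flush_tlb_page_ipi(void *param)
{
	struct tlb_flush_param *p = param;

	_tlbil_va(p->addr, p->pid);
}

void flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr)
{
	struct tlb_flush_param p = {
		.addr = vmaddr,
		.pid  = vma ? vma->vm_mm->context.id : 0,
	};

	preempt_disable();
	smp_call_function(do_flush_tlb_page_ipi, &p, 1);	/* other CPUs */
	_tlbil_va(p.addr, p.pid);				/* local CPU  */
	preempt_enable();
}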
-
Benjamin Herrenschmidt authored
We're soon running out of CPU features and I need to add some new ones for various MMU related bits, so this patch separates the MMU features from the CPU features. I moved over the 32-bit MMU related ones, added base features for MMU type families, but didn't move over any 64-bit only feature yet. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Acked-by: Kumar Gala <galak@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Benjamin Herrenschmidt authored
This reworks the context management code used by 4xx,8xx and freescale BookE. It adds support for SMP by implementing a concept of stale context map to lazily flush the TLB on processors where a context may have been invalidated. This also contains the ground work for generalizing such lazy TLB flushing by just picking up a new PID and marking the old one stale. This will be implemented later. This is a first implementation that uses a global spinlock. Ideally, we should try to get at least the fast path (context ID already assigned) lockless or limited to a per context lock, but for now this will do. I tried to keep the UP case reasonably simple to avoid adding too much overhead to 8xx which does a lot of context stealing since it effectively has only 16 PIDs available. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Acked-by: Kumar Gala <galak@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Benjamin Herrenschmidt authored
This splits the mmu_context handling between 32-bit hash based processors, 64-bit hash based processors and everybody else. This is preliminary work for adding SMP support for BookE processors. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Acked-by: Kumar Gala <galak@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Benjamin Herrenschmidt authored
This adds support for the "extended" DCR addressing via the indirect mfdcrx/mtdcrx instructions supported by some 4xx cores (440H6 and later). I enabled the feature for now only on AMCC 460 chips. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Acked-by: Josh Boyer <jwboyer@linux.vnet.ibm.com> Acked-by: Kumar Gala <galak@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Brian King authored
When running Active Memory Sharing, pages can get marked as "loaned" with the hypervisor by the CMM driver. This state gets cleared by the system firmware when rebooting the partition. When using kexec to boot a new kernel, this state never gets cleared and the hypervisor and CMM driver can get out of sync with respect to the number of pages currently marked "loaned". Fix this by adding a reboot notifier to the CMM driver to deflate the balloon and mark all pages as active. Signed-off-by: Brian King <brking@linux.vnet.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
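A sketch of the mechanism described above (simplified; cmm_free_pages() and loaned_pages stand in for the driver's own bookkeeping):

#include <linux/init.h>
#include <linux/notifier.h>
#include <linux/reboot.h>

extern long loaned_pages;			/* pages currently marked loaned */
extern void cmm_free_pages(long nr);		/* return pages to Linux */

/* Rebooting (including kexec, which goes through the normal reboot
 * notifier chain) returns every loaned page so the hypervisor's count
 * matches the freshly booted kernel's view. */
static int cmm_reboot_notifier(struct notifier_block *nb,
			       unsigned long action, void *unused)
{
	if (action == SYS_RESTART)
		cmm_free_pages(loaned_pages);

	return NOTIFY_DONE;
}

static struct notifier_block cmm_reboot_nb = {
	.notifier_call = cmm_reboot_notifier,
};

static int __init cmm_register_reboot_notifier(void)
{
	return register_reboot_notifier(&cmm_reboot_nb);
}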
-