linux / linux-davinci-2.6.23 · Commits
Commit 33cbd30e, authored Mar 21, 2006 by Tony Luck

    Pull ia64-mutex-primitives into release branch

Parents: 536ea4e4, a454c2f3

Showing 1 changed file with 88 additions and 5 deletions.

include/asm-ia64/mutex.h @ 33cbd30e (+88, -5)
 /*
- * Pull in the generic implementation for the mutex fastpath.
+ * ia64 implementation of the mutex fastpath.
  *
- * TODO: implement optimized primitives instead, or leave the generic
- * implementation in place, or pick the atomic_xchg() based generic
- * implementation. (see asm-generic/mutex-xchg.h for details)
+ * Copyright (C) 2006 Ken Chen <kenneth.w.chen@intel.com>
  */
+#ifndef _ASM_MUTEX_H
+#define _ASM_MUTEX_H
+
+/**
+ * __mutex_fastpath_lock - try to take the lock by moving the count
+ *                         from 1 to a 0 value
+ * @count: pointer of type atomic_t
+ * @fail_fn: function to call if the original value was not 1
+ *
+ * Change the count from 1 to a value lower than 1, and call <fail_fn> if
+ * it wasn't 1 originally. This function MUST leave the value lower than
+ * 1 even when the "1" assertion wasn't true.
+ */
+static inline void
+__mutex_fastpath_lock(atomic_t *count, void (*fail_fn)(atomic_t *))
+{
+	if (unlikely(ia64_fetchadd4_acq(count, -1) != 1))
+		fail_fn(count);
+}
+
+/**
+ * __mutex_fastpath_lock_retval - try to take the lock by moving the count
+ *                                from 1 to a 0 value
+ * @count: pointer of type atomic_t
+ * @fail_fn: function to call if the original value was not 1
+ *
+ * Change the count from 1 to a value lower than 1, and call <fail_fn> if
+ * it wasn't 1 originally. This function returns 0 if the fastpath succeeds,
+ * or anything the slow path function returns.
+ */
+static inline int
+__mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
+{
+	if (unlikely(ia64_fetchadd4_acq(count, -1) != 1))
+		return fail_fn(count);
+	return 0;
+}
+
+/**
+ * __mutex_fastpath_unlock - try to promote the count from 0 to 1
+ * @count: pointer of type atomic_t
+ * @fail_fn: function to call if the original value was not 0
+ *
+ * Try to promote the count from 0 to 1. If it wasn't 0, call <fail_fn>.
+ * In the failure case, this function is allowed to either set the value to
+ * 1, or to set it to a value lower than 1.
+ *
+ * If the implementation sets it to a value of lower than 1, then the
+ * __mutex_slowpath_needs_to_unlock() macro needs to return 1, it needs
+ * to return 0 otherwise.
+ */
+static inline void
+__mutex_fastpath_unlock(atomic_t *count, void (*fail_fn)(atomic_t *))
+{
+	int ret = ia64_fetchadd4_rel(count, 1);
+
+	if (unlikely(ret < 0))
+		fail_fn(count);
+}
+
+#define __mutex_slowpath_needs_to_unlock()	1
+
+/**
+ * __mutex_fastpath_trylock - try to acquire the mutex, without waiting
+ *
+ * @count: pointer of type atomic_t
+ * @fail_fn: fallback function
+ *
+ * Change the count from 1 to a value lower than 1, and return 0 (failure)
+ * if it wasn't 1 originally, or return 1 (success) otherwise. This function
+ * MUST leave the value lower than 1 even when the "1" assertion wasn't true.
+ * Additionally, if the value was < 0 originally, this function must not leave
+ * it to 0 on failure.
+ *
+ * If the architecture has no effective trylock variant, it should call the
+ * <fail_fn> spinlock-based trylock variant unconditionally.
+ */
+static inline int
+__mutex_fastpath_trylock(atomic_t *count, int (*fail_fn)(atomic_t *))
+{
+	if (likely(cmpxchg_acq(count, 1, 0) == 1))
+		return 1;
+	return 0;
+}
-#include <asm-generic/mutex-dec.h>
+
+#endif