This is the mail archive of the systemtap@sourceware.org mailing list for the systemtap project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: [PATCH v2 2.6.38-rc8-tip 3/20] 3: uprobes: Background page replacement.


* Thomas Gleixner <tglx@linutronix.de> [2011-03-15 14:22:09]:

> On Mon, 14 Mar 2011, Srikar Dronamraju wrote:
> > +/*
> > + * Called with tsk->mm->mmap_sem held (either for read or write and
> > + * with a reference to tsk->mm
> 
> Hmm, why is holding it for read sufficient?

We are not adding a new vma to the mm; we are just replacing one page
with another after taking the page locks. Existing routines that do
something close to this, such as access_process_vm()/get_user_pages(),
seem to take the read lock. Do you see a reason why the read lock
wouldn't suffice?

> .
> > + */
> > +static int write_opcode(struct task_struct *tsk, struct uprobe * uprobe,
> > +			unsigned long vaddr, uprobe_opcode_t opcode)
> > +{
> > +	struct page *old_page, *new_page;
> > +	void *vaddr_old, *vaddr_new;
> > +	struct vm_area_struct *vma;
> > +	spinlock_t *ptl;
> > +	pte_t *orig_pte;
> > +	unsigned long addr;
> > +	int ret = -EINVAL;
> 
> That initialization is pointless.
Okay, will drop the initialization.

> 
> > +	/* Read the page with vaddr into memory */
> > +	ret = get_user_pages(tsk, tsk->mm, vaddr, 1, 1, 1, &old_page, &vma);
> > +	if (ret <= 0)
> > +		return -EINVAL;
> > +	ret = -EINVAL;
> > +
> > +	/*
> > +	 * check if the page we are interested is read-only mapped
> > +	 * Since we are interested in text pages, Our pages of interest
> > +	 * should be mapped read-only.
> > +	 */
> > +	if ((vma->vm_flags & (VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)) ==
> > +						(VM_READ|VM_EXEC))
> > +		goto put_out;
> 
> IIRC then text pages are (VM_READ|VM_EXEC)

Steven Rostedt already pointed this out.


> 
> > +	/* Allocate a page */
> > +	new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, vaddr);
> > +	if (!new_page) {
> > +		ret = -ENOMEM;
> > +		goto put_out;
> > +	}
> > +
> > +	/*
> > +	 * lock page will serialize against do_wp_page()'s
> > +	 * PageAnon() handling
> > +	 */
> > +	lock_page(old_page);
> > +	/* copy the page now that we've got it stable */
> > +	vaddr_old = kmap_atomic(old_page, KM_USER0);
> > +	vaddr_new = kmap_atomic(new_page, KM_USER1);
> > +
> > +	memcpy(vaddr_new, vaddr_old, PAGE_SIZE);
> > +	/* poke the new insn in, ASSUMES we don't cross page boundary */
> 
> And what makes sure that we don't ?

We expect the breakpoint instruction to be the minimum-size
instruction for the architecture, so this is not a problem on
architectures with fixed-length instructions.
For architectures with variable-length instructions, I am hoping
that the opcode will be small enough that it never crosses a page
boundary, something like 0xCC on x86. If and when we support an
architecture whose breakpoint instruction can span a page boundary,
we will have to add more logic to handle that case.

> 
> > +	addr = vaddr;
> > +	vaddr &= ~PAGE_MASK;
> > +	memcpy(vaddr_new + vaddr, &opcode, uprobe_opcode_sz);
> > +
> > +	kunmap_atomic(vaddr_new, KM_USER1);
> > +	kunmap_atomic(vaddr_old, KM_USER0);
> > +
> > +	orig_pte = page_check_address(old_page, tsk->mm, addr, &ptl, 0);
> > +	if (!orig_pte)
> > +		goto unlock_out;
> > +	pte_unmap_unlock(orig_pte, ptl);
> > +
> > +	lock_page(new_page);
> > +	if (!anon_vma_prepare(vma))
> 
> Why don't you get the error code of anon_vma_prepare()?

Okay, will capture the error code of anon_vma_prepare().

> 
> > +		/* flip pages, do_wp_page() will fail pte_same() and bail */
> 
> -ENOPARSE
> 
> > +		ret = replace_page(vma, old_page, new_page, *orig_pte);
> > +
> > +	unlock_page(new_page);
> > +	if (ret != 0)
> > +		page_cache_release(new_page);
> > +unlock_out:
> > +	unlock_page(old_page);
> > +
> > +put_out:
> > +	put_page(old_page); /* we did a get_page in the beginning */
> > +	return ret;
> > +}
> > +
> > +/**
> > + * read_opcode - read the opcode at a given virtual address.
> > + * @tsk: the probed task.
> > + * @vaddr: the virtual address to store the opcode.
> > + * @opcode: location to store the read opcode.
> > + *
> > + * For task @tsk, read the opcode at @vaddr and store it in @opcode.
> > + * Return 0 (success) or a negative errno.
> 
> Wants to called with mmap_sem held as well, right ?

Yes, will document.

> 
> > + */
> > +int __weak read_opcode(struct task_struct *tsk, unsigned long vaddr,
> > +						uprobe_opcode_t *opcode)
> > +{
> > +	struct vm_area_struct *vma;
> > +	struct page *page;
> > +	void *vaddr_new;
> > +	int ret;
> > +
> > +	ret = get_user_pages(tsk, tsk->mm, vaddr, 1, 0, 0, &page, &vma);
> > +	if (ret <= 0)
> > +		return -EFAULT;
> > +	ret = -EFAULT;
> > +
> > +	/*
> > +	 * check if the page we are interested is read-only mapped
> > +	 * Since we are interested in text pages, Our pages of interest
> > +	 * should be mapped read-only.
> > +	 */
> > +	if ((vma->vm_flags & (VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)) ==
> > +						(VM_READ|VM_EXEC))
> > +		goto put_out;
> 
> Same as above
> 
> > +	lock_page(page);
> > +	vaddr_new = kmap_atomic(page, KM_USER0);
> > +	vaddr &= ~PAGE_MASK;
> > +	memcpy(&opcode, vaddr_new + vaddr, uprobe_opcode_sz);
> > +	kunmap_atomic(vaddr_new, KM_USER0);
> > +	unlock_page(page);
> > +	ret =  uprobe_opcode_sz;
> 
>   ret = 0 ?? At least, that's what the comment above says.

This has already been pointed out by Stephen Wilson;
setting ret = 0 here.

> 
> > +
> > +put_out:
> > +	put_page(page); /* we did a get_page in the beginning */
> > +	return ret;
> > +}
> > +

