Discussion:
Can the I/OAT DMA engine access PCI MMIO space
康剑斌
2011-04-29 10:13:04 UTC
I am trying to use ioatdma to copy data from system memory to PCI MMIO space:

=======================
void mmio_copy(void *dst, void *src, size_t len)
{
	struct async_submit_ctl submit;
	struct memcpy_context ctx;

	/* hold one reference until all chunks have been submitted */
	atomic_set(&ctx.cnt, 1);
	init_completion(&ctx.cmp);
	init_async_submit(&submit, ASYNC_TX_ACK, NULL,
			  done_memcpy, &ctx, NULL);

	while (len) {
		size_t clen = (len > PAGE_SIZE) ? PAGE_SIZE : len;
		struct page *p_dst, *p_src;

		/* dst and src are assumed to be page aligned */
		p_dst = virt_to_page(dst);
		p_src = virt_to_page(src);

		atomic_inc(&ctx.cnt);

		ntb_async_memcpy(p_dst, p_src, 0, 0, clen, &submit);

		dst += clen;
		src += clen;
		len -= clen;
	}

	/* drop the initial reference; done if all callbacks already ran */
	if (atomic_dec_and_test(&ctx.cnt))
		return;

	async_tx_issue_pending_all();
	wait_for_completion(&ctx.cmp);
}
=================

If dst points to system memory, the operation passes.
But if dst points to PCI MMIO space, it fails with a kernel oops.
It seems the code:
BUG_ON(is_ioat_bug(chanerr));
in drivers/dma/ioat/dma_v3.c, line 365, causes the oops.
Is there any way to access PCI MMIO space using I/OAT?
The datasheet says that I/OAT supports MMIO access.
Koul, Vinod
2011-05-02 06:04:20 UTC
Post by 康剑斌
I am trying to use ioatdma to copy data from system memory to PCI MMIO space:
If dst points to system memory, the operation passes.
But if dst points to PCI MMIO space, it fails with a kernel oops.
BUG_ON(is_ioat_bug(chanerr));
in drivers/dma/ioat/dma_v3.c, line 365, causes the oops.
Is there any way to access PCI MMIO space using I/OAT?
The datasheet says that I/OAT supports MMIO access.
Did you map the I/O memory in the kernel using ioremap and friends first?

--
~Vinod
康剑斌
2011-05-03 02:21:52 UTC
Post by Koul, Vinod
Post by 康剑斌
If dst points to system memory, the operation passes.
But if dst points to PCI MMIO space, it fails with a kernel oops.
BUG_ON(is_ioat_bug(chanerr));
in drivers/dma/ioat/dma_v3.c, line 365, causes the oops.
Is there any way to access PCI MMIO space using I/OAT?
The datasheet says that I/OAT supports MMIO access.
Did you map the I/O memory in the kernel using ioremap and friends first?
Yes, I used 'ioremap_nocache' to map the I/O memory, and I can use
memcpy to copy data to this region. async_tx should be correctly
configured, as I can use async_memcpy to copy data between different
system memory addresses.
Koul, Vinod
2011-05-03 04:12:06 UTC
Post by 康剑斌
I am trying to use ioatdma to copy data from system memory to PCI MMIO space:
If dst points to system memory, the operation passes.
But if dst points to PCI MMIO space, it fails with a kernel oops.
BUG_ON(is_ioat_bug(chanerr));
in drivers/dma/ioat/dma_v3.c, line 365, causes the oops.
Is there any way to access PCI MMIO space using I/OAT?
The datasheet says that I/OAT supports MMIO access.
Did you map the I/O memory in the kernel using ioremap and friends first?
Post by 康剑斌
Yes, I used 'ioremap_nocache' to map the I/O memory, and I can use
memcpy to copy data to this region. async_tx should be correctly
configured, as I can use async_memcpy to copy data between different
system memory addresses.
Then you should be using memcpy_toio() and friends
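A minimal sketch of that CPU-driven path, assuming the region is mapped
first (the device pointer, BAR index and source buffer are placeholders,
not code from this thread):

=================
#include <linux/pci.h>
#include <linux/io.h>

/*
 * Sketch only: CPU-driven copy into a PCI BAR via memcpy_toio().
 * "pdev", the BAR index and "src"/"len" are supplied by the caller.
 */
static int cpu_copy_to_bar(struct pci_dev *pdev, int bar,
			   const void *src, size_t len)
{
	void __iomem *mmio = pci_ioremap_bar(pdev, bar);

	if (!mmio)
		return -ENOMEM;

	memcpy_toio(mmio, src, len);	/* synchronous, burns CPU cycles */
	iounmap(mmio);
	return 0;
}
=================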

--
~Vinod
康剑斌
2011-05-03 06:31:46 UTC
Post by Koul, Vinod
Post by 康剑斌
Yes, I used 'ioremap_nocache' to map the I/O memory, and I can use
memcpy to copy data to this region. async_tx should be correctly
configured, as I can use async_memcpy to copy data between different
system memory addresses.
Then you should be using memcpy_toio() and friends
Do you mean that once I have mapped the MMIO region, I can't use I/OAT
DMA to transfer to it any more?
I can use memcpy to copy the data, but it consumes a lot of CPU because
PCI access is so slow.
If I could use I/OAT DMA and the async_tx API to do the job, the
performance should be improved.
Thanks
康剑斌
2011-05-05 08:45:58 UTC
On 2011-05-03 23:58, Dan Williams wrote:
Post by 康剑斌
Do you mean that once I have mapped the MMIO region, I can't use I/OAT
DMA to transfer to it any more?
I can use memcpy to copy the data, but it consumes a lot of CPU because
PCI access is so slow.
If I could use I/OAT DMA and the async_tx API to do the job, the
performance should be improved.
Thanks
The async_tx API only supports memory-to-memory transfers. To write
to mmio space with ioatdma you would need a custom method, like the
dma-slave support in other drivers, to program the descriptors with
the physical mmio bus address.
--
Dan
Thanks.
I directly read the PCI BAR address and program it into the descriptors,
and ioatdma works.
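For illustration, roughly the kind of descriptor programming meant above
(a sketch, not the exact code used here; the channel is assumed to come
from dma_request_channel(), the source buffer is already DMA-mapped, and
the BAR index is a placeholder):

=================
#include <linux/dmaengine.h>
#include <linux/pci.h>

/*
 * Sketch: prepare a DMA_MEMCPY descriptor whose destination is the
 * physical address of a PCI BAR, then submit and kick the channel.
 */
static dma_cookie_t dma_copy_to_bar(struct dma_chan *chan,
				    struct pci_dev *pdev,
				    dma_addr_t src_dma, size_t len)
{
	dma_addr_t dst = pci_resource_start(pdev, 2);	/* placeholder BAR */
	struct dma_async_tx_descriptor *tx;
	dma_cookie_t cookie;

	tx = chan->device->device_prep_dma_memcpy(chan, dst, src_dma, len,
						   DMA_PREP_INTERRUPT);
	if (!tx)
		return -ENOMEM;

	cookie = tx->tx_submit(tx);
	dma_async_issue_pending(chan);
	return cookie;
}
=================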
One remaining problem: when the PCI transfer fails (using an NTB connected
to another system, and that system powers down), ioatdma causes a kernel oops:

BUG_ON(is_ioat_bug(chanerr));
in drivers/dma/ioat/dma_v3.c, line 365

It seems that the HW reports IOAT_CHANERR_DEST_ADDR_ERR, and the driver
can't recover from this situation.
What does dma-slave mean? Just the DMA_SLAVE flag that exists in other DMA
drivers?
Dan Williams
2011-05-05 15:11:14 UTC
[ adding Dave ]
Post by 康剑斌
Thanks.
I directly read the PCI BAR address and program it into the descriptors,
and ioatdma works.
One remaining problem: when the PCI transfer fails (using an NTB connected
to another system, and that system powers down), ioatdma causes a kernel oops.
BUG_ON(is_ioat_bug(chanerr));
in drivers/dma/ioat/dma_v3.c, line 365
It seems that the HW reports IOAT_CHANERR_DEST_ADDR_ERR, and the driver
can't recover from this situation.
Ah ok, this is expected with the current upstream ioatdma driver. The
driver assumes that all transfers are mem-to-mem (ASYNC_TX_DMA or
NET_DMA) and that a destination address error is a fatal error (similar
to a kernel page fault).

With NTB, where failures are expected, the driver would need to be
modified to expect the error, recover from it, and report it to the
application.
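Conceptually, the halt handling around that BUG_ON would have to
special-case address errors, along the lines of this untested sketch
(IOAT_CHANERR_* and is_ioat_bug() are existing ioatdma definitions;
ioat_handle_failed_desc() is a hypothetical helper that completes the
descriptor with an error status):

=================
/* untested sketch, not a real patch */
chanerr = readl(chan->reg_base + IOAT_CHANERR_OFFSET);
if (chanerr & (IOAT_CHANERR_SRC_ADDR_ERR | IOAT_CHANERR_DEST_ADDR_ERR)) {
	/* acknowledge the error bits so the channel can be restarted */
	writel(chanerr, chan->reg_base + IOAT_CHANERR_OFFSET);
	ioat_handle_failed_desc(ioat);	/* hypothetical: complete with error */
} else {
	BUG_ON(is_ioat_bug(chanerr));
}
=================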
Post by 康剑斌
What does dma-slave mean? Just the DMA_SLAVE flag that exists in other DMA
drivers?
Yes, DMA_SLAVE is the generic framework to associate a dma offload
device with an mmio peripheral.
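For reference, this is roughly how a slave channel gets pointed at a fixed
device address with today's dmaengine helpers (a sketch only; ioatdma does
not implement DMA_SLAVE, and chan/mmio_bus_addr/src_dma are placeholders):

=================
#include <linux/dmaengine.h>

/*
 * Sketch of DMA_SLAVE-style configuration: tell the channel the fixed
 * bus address of the peripheral (here an MMIO/BAR address), then prepare
 * slave descriptors against it.
 */
static int slave_copy_to_mmio(struct dma_chan *chan, dma_addr_t mmio_bus_addr,
			      dma_addr_t src_dma, size_t len)
{
	struct dma_slave_config cfg = {
		.dst_addr	= mmio_bus_addr,	/* fixed device address */
		.dst_addr_width	= DMA_SLAVE_BUSWIDTH_4_BYTES,
		.dst_maxburst	= 16,
	};
	struct dma_async_tx_descriptor *tx;

	if (dmaengine_slave_config(chan, &cfg))
		return -EINVAL;

	tx = dmaengine_prep_slave_single(chan, src_dma, len,
					 DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT);
	if (!tx)
		return -ENOMEM;

	dmaengine_submit(tx);
	dma_async_issue_pending(chan);
	return 0;
}
=================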

--
Dan
