From: Jesse B. <jb...@vi...> - 2008-10-30 21:03:33
In the hopes of landing the kernel mode setting stuff in the 2.6.29 merge window I've been trying to get things into shape. This patch set is a very rough start at getting things merge-ready. There's still a lot left to do:
- document the new interfaces (basically finish the DRM developer guide I started a long time ago); this should help get other drivers ported
- fix i915 interrupt support; this patch set basically reverts most of the earlier mode setting changes, and there are lots of fixes upstream I didn't want to lose
- fix bugs; there are lots of these to go around (right now load/unload and lastclose seem to cause massive memory corruption)
- test the new code with modeset=0 against old drivers

These patches sit on top of my GTT mapping patches (posted earlier), since a KMS aware 2D driver really wants GTT mapping too. Also, I just pushed a fix to the 2D driver that, in combination with my EXA pixmap management patches (which in turn depend on the libdrm bits), gets it limping along with kernel mode setting support.

My rough list of priorities is as follows:
1) make load/unload solid (this will make subsequent development *much* easier)
2) make X startup/teardown solid (this is probably a dup of (1) at this point)
3) fix up the IRQ handling, re-adding the ripped out hot plug support
4) minimize the diff with upstream (reverting some of the cleanups I foolishly included in the initial mode setting work)
5) change the panic/oops code to use oops_begin rather than the panic notifier
...and of course fix bugs as they're discovered along the way.

I should have some patches early next week for our QA team and the broader community to bang on, but any feedback people have on this set can be included in the next post, so feel free to send it along.

Thanks,
--
Jesse Barnes, Intel Open Source Technology Center
From: Jesse B. <jb...@vi...> - 2008-10-30 21:13:30
On Thursday, October 30, 2008 2:03 pm Jesse Barnes wrote:
> In the hopes of landing the kernel mode setting stuff in the 2.6.29
> merge window I've been trying to get things into shape.
> [snip]

Wow, they turned out to be fairly huge. Definitely need to be split up some more.
In the likely event that the mailing list fails to deliver the 3 patches in reply to this message, you can grab them from kernel.org: http://www.kernel.org/pub/linux/kernel/people/jbarnes/patches/drm-kms

Thanks,
Jesse
From: Jesse B. <jb...@vi...> - 2008-10-30 21:10:19
Adds support to the radeon DRM driver for DRM based mode setting. This is just for completeness; this patch is already out of date with respect to what's in modesetting-gem.

diff --git a/drivers/gpu/drm/radeon/Makefile b/drivers/gpu/drm/radeon/Makefile
index feb521e..5d6bc6c 100644
--- a/drivers/gpu/drm/radeon/Makefile
+++ b/drivers/gpu/drm/radeon/Makefile
@@ -3,7 +3,11 @@
 # Direct Rendering Infrastructure (DRI) in XFree86 4.1.0 and higher.
 
 ccflags-y := -Iinclude/drm
 
-radeon-y := radeon_drv.o radeon_cp.o radeon_state.o radeon_mem.o radeon_irq.o r300_cmdbuf.o
+radeon-y := radeon_drv.o radeon_cp.o radeon_state.o radeon_mem.o radeon_irq.o r300_cmdbuf.o \
+	radeon_gem.o radeon_buffer.o radeon_fence.o radeon_cs.o \
+	radeon_i2c.o radeon_fb.o radeon_encoders.o radeon_connectors.o radeon_display.o \
+	atombios_crtc.o atom.o radeon_atombios.o radeon_combios.o radeon_legacy_crtc.o \
+	radeon_legacy_encoders.o radeon_cursor.o radeon_pm.o radeon_gem_proc.o
 
 radeon-$(CONFIG_COMPAT) += radeon_ioc32.o

diff --git a/drivers/gpu/drm/radeon/ObjectID.h b/drivers/gpu/drm/radeon/ObjectID.h
new file mode 100644
index 0000000..4b106cf
--- /dev/null
+++ b/drivers/gpu/drm/radeon/ObjectID.h
@@ -0,0 +1,484 @@
+/*
+* Copyright 2006-2007 Advanced Micro Devices, Inc.
+*
+* Permission is hereby granted, free of charge, to any person obtaining a
+* copy of this software and associated documentation files (the "Software"),
+* to deal in the Software without restriction, including without limitation
+* the rights to use, copy, modify, merge, publish, distribute, sublicense,
+* and/or sell copies of the Software, and to permit persons to whom the
+* Software is furnished to do so, subject to the following conditions:
+*
+* The above copyright notice and this permission notice shall be included in
+* all copies or substantial portions of the Software.
+* +* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL +* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR +* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, +* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR +* OTHER DEALINGS IN THE SOFTWARE. +*/ +/* based on stg/asic_reg/drivers/inc/asic_reg/ObjectID.h ver 23 */ + +#ifndef _OBJECTID_H +#define _OBJECTID_H + +#if defined(_X86_) +#pragma pack(1) +#endif + +/****************************************************/ +/* Graphics Object Type Definition */ +/****************************************************/ +#define GRAPH_OBJECT_TYPE_NONE 0x0 +#define GRAPH_OBJECT_TYPE_GPU 0x1 +#define GRAPH_OBJECT_TYPE_ENCODER 0x2 +#define GRAPH_OBJECT_TYPE_CONNECTOR 0x3 +#define GRAPH_OBJECT_TYPE_ROUTER 0x4 +/* deleted */ + +/****************************************************/ +/* Encoder Object ID Definition */ +/****************************************************/ +#define ENCODER_OBJECT_ID_NONE 0x00 + +/* Radeon Class Display Hardware */ +#define ENCODER_OBJECT_ID_INTERNAL_LVDS 0x01 +#define ENCODER_OBJECT_ID_INTERNAL_TMDS1 0x02 +#define ENCODER_OBJECT_ID_INTERNAL_TMDS2 0x03 +#define ENCODER_OBJECT_ID_INTERNAL_DAC1 0x04 +#define ENCODER_OBJECT_ID_INTERNAL_DAC2 0x05 /* TV/CV DAC */ +#define ENCODER_OBJECT_ID_INTERNAL_SDVOA 0x06 +#define ENCODER_OBJECT_ID_INTERNAL_SDVOB 0x07 + +/* External Third Party Encoders */ +#define ENCODER_OBJECT_ID_SI170B 0x08 +#define ENCODER_OBJECT_ID_CH7303 0x09 +#define ENCODER_OBJECT_ID_CH7301 0x0A +#define ENCODER_OBJECT_ID_INTERNAL_DVO1 0x0B /* This belongs to Radeon Class Display Hardware */ +#define ENCODER_OBJECT_ID_EXTERNAL_SDVOA 0x0C +#define ENCODER_OBJECT_ID_EXTERNAL_SDVOB 0x0D +#define ENCODER_OBJECT_ID_TITFP513 0x0E +#define 
ENCODER_OBJECT_ID_INTERNAL_LVTM1 0x0F /* not used for Radeon */ +#define ENCODER_OBJECT_ID_VT1623 0x10 +#define ENCODER_OBJECT_ID_HDMI_SI1930 0x11 +#define ENCODER_OBJECT_ID_HDMI_INTERNAL 0x12 +/* Kaleidoscope (KLDSCP) Class Display Hardware (internal) */ +#define ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1 0x13 +#define ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1 0x14 +#define ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1 0x15 +#define ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC2 0x16 /* Shared with CV/TV and CRT */ +#define ENCODER_OBJECT_ID_SI178 0X17 /* External TMDS (dual link, no HDCP.) */ +#define ENCODER_OBJECT_ID_MVPU_FPGA 0x18 /* MVPU FPGA chip */ +#define ENCODER_OBJECT_ID_INTERNAL_DDI 0x19 +#define ENCODER_OBJECT_ID_VT1625 0x1A +#define ENCODER_OBJECT_ID_HDMI_SI1932 0x1B +#define ENCODER_OBJECT_ID_DP_AN9801 0x1C +#define ENCODER_OBJECT_ID_DP_DP501 0x1D +#define ENCODER_OBJECT_ID_INTERNAL_UNIPHY 0x1E +#define ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA 0x1F + +/****************************************************/ +/* Connector Object ID Definition */ +/****************************************************/ +#define CONNECTOR_OBJECT_ID_NONE 0x00 +#define CONNECTOR_OBJECT_ID_SINGLE_LINK_DVI_I 0x01 +#define CONNECTOR_OBJECT_ID_DUAL_LINK_DVI_I 0x02 +#define CONNECTOR_OBJECT_ID_SINGLE_LINK_DVI_D 0x03 +#define CONNECTOR_OBJECT_ID_DUAL_LINK_DVI_D 0x04 +#define CONNECTOR_OBJECT_ID_VGA 0x05 +#define CONNECTOR_OBJECT_ID_COMPOSITE 0x06 +#define CONNECTOR_OBJECT_ID_SVIDEO 0x07 +#define CONNECTOR_OBJECT_ID_YPbPr 0x08 +#define CONNECTOR_OBJECT_ID_D_CONNECTOR 0x09 +#define CONNECTOR_OBJECT_ID_9PIN_DIN 0x0A /* Supports both CV & TV */ +#define CONNECTOR_OBJECT_ID_SCART 0x0B +#define CONNECTOR_OBJECT_ID_HDMI_TYPE_A 0x0C +#define CONNECTOR_OBJECT_ID_HDMI_TYPE_B 0x0D +#define CONNECTOR_OBJECT_ID_LVDS 0x0E +#define CONNECTOR_OBJECT_ID_7PIN_DIN 0x0F +#define CONNECTOR_OBJECT_ID_PCIE_CONNECTOR 0x10 +#define CONNECTOR_OBJECT_ID_CROSSFIRE 0x11 +#define CONNECTOR_OBJECT_ID_HARDCODE_DVI 0x12 +#define 
CONNECTOR_OBJECT_ID_DISPLAYPORT 0x13 + +/* deleted */ + +/****************************************************/ +/* Router Object ID Definition */ +/****************************************************/ +#define ROUTER_OBJECT_ID_NONE 0x00 +#define ROUTER_OBJECT_ID_I2C_EXTENDER_CNTL 0x01 + +/****************************************************/ +// Graphics Object ENUM ID Definition */ +/****************************************************/ +#define GRAPH_OBJECT_ENUM_ID1 0x01 +#define GRAPH_OBJECT_ENUM_ID2 0x02 +#define GRAPH_OBJECT_ENUM_ID3 0x03 +#define GRAPH_OBJECT_ENUM_ID4 0x04 + +/****************************************************/ +/* Graphics Object ID Bit definition */ +/****************************************************/ +#define OBJECT_ID_MASK 0x00FF +#define ENUM_ID_MASK 0x0700 +#define RESERVED1_ID_MASK 0x0800 +#define OBJECT_TYPE_MASK 0x7000 +#define RESERVED2_ID_MASK 0x8000 + +#define OBJECT_ID_SHIFT 0x00 +#define ENUM_ID_SHIFT 0x08 +#define OBJECT_TYPE_SHIFT 0x0C + + +/****************************************************/ +/* Graphics Object family definition */ +/****************************************************/ +#define CONSTRUCTOBJECTFAMILYID(GRAPHICS_OBJECT_TYPE, GRAPHICS_OBJECT_ID) (GRAPHICS_OBJECT_TYPE << OBJECT_TYPE_SHIFT | \ + GRAPHICS_OBJECT_ID << OBJECT_ID_SHIFT) +/****************************************************/ +/* GPU Object ID definition - Shared with BIOS */ +/****************************************************/ +#define GPU_ENUM_ID1 ( GRAPH_OBJECT_TYPE_GPU << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT) + +/****************************************************/ +/* Encoder Object ID definition - Shared with BIOS */ +/****************************************************/ +/* +#define ENCODER_INTERNAL_LVDS_ENUM_ID1 0x2101 +#define ENCODER_INTERNAL_TMDS1_ENUM_ID1 0x2102 +#define ENCODER_INTERNAL_TMDS2_ENUM_ID1 0x2103 +#define ENCODER_INTERNAL_DAC1_ENUM_ID1 0x2104 +#define ENCODER_INTERNAL_DAC2_ENUM_ID1 
0x2105 +#define ENCODER_INTERNAL_SDVOA_ENUM_ID1 0x2106 +#define ENCODER_INTERNAL_SDVOB_ENUM_ID1 0x2107 +#define ENCODER_SIL170B_ENUM_ID1 0x2108 +#define ENCODER_CH7303_ENUM_ID1 0x2109 +#define ENCODER_CH7301_ENUM_ID1 0x210A +#define ENCODER_INTERNAL_DVO1_ENUM_ID1 0x210B +#define ENCODER_EXTERNAL_SDVOA_ENUM_ID1 0x210C +#define ENCODER_EXTERNAL_SDVOB_ENUM_ID1 0x210D +#define ENCODER_TITFP513_ENUM_ID1 0x210E +#define ENCODER_INTERNAL_LVTM1_ENUM_ID1 0x210F +#define ENCODER_VT1623_ENUM_ID1 0x2110 +#define ENCODER_HDMI_SI1930_ENUM_ID1 0x2111 +#define ENCODER_HDMI_INTERNAL_ENUM_ID1 0x2112 +#define ENCODER_INTERNAL_KLDSCP_TMDS1_ENUM_ID1 0x2113 +#define ENCODER_INTERNAL_KLDSCP_DVO1_ENUM_ID1 0x2114 +#define ENCODER_INTERNAL_KLDSCP_DAC1_ENUM_ID1 0x2115 +#define ENCODER_INTERNAL_KLDSCP_DAC2_ENUM_ID1 0x2116 +#define ENCODER_SI178_ENUM_ID1 0x2117 +#define ENCODER_MVPU_FPGA_ENUM_ID1 0x2118 +#define ENCODER_INTERNAL_DDI_ENUM_ID1 0x2119 +#define ENCODER_VT1625_ENUM_ID1 0x211A +#define ENCODER_HDMI_SI1932_ENUM_ID1 0x211B +#define ENCODER_ENCODER_DP_AN9801_ENUM_ID1 0x211C +#define ENCODER_DP_DP501_ENUM_ID1 0x211D +#define ENCODER_INTERNAL_UNIPHY_ENUM_ID1 0x211E +*/ +#define ENCODER_INTERNAL_LVDS_ENUM_ID1 ( GRAPH_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + ENCODER_OBJECT_ID_INTERNAL_LVDS << OBJECT_ID_SHIFT) + +#define ENCODER_INTERNAL_TMDS1_ENUM_ID1 ( GRAPH_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + ENCODER_OBJECT_ID_INTERNAL_TMDS1 << OBJECT_ID_SHIFT) + +#define ENCODER_INTERNAL_TMDS2_ENUM_ID1 ( GRAPH_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + ENCODER_OBJECT_ID_INTERNAL_TMDS2 << OBJECT_ID_SHIFT) + +#define ENCODER_INTERNAL_DAC1_ENUM_ID1 ( GRAPH_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + ENCODER_OBJECT_ID_INTERNAL_DAC1 << OBJECT_ID_SHIFT) + +#define ENCODER_INTERNAL_DAC2_ENUM_ID1 ( 
GRAPH_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + ENCODER_OBJECT_ID_INTERNAL_DAC2 << OBJECT_ID_SHIFT) + +#define ENCODER_INTERNAL_SDVOA_ENUM_ID1 ( GRAPH_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + ENCODER_OBJECT_ID_INTERNAL_SDVOA << OBJECT_ID_SHIFT) + +#define ENCODER_INTERNAL_SDVOA_ENUM_ID2 ( GRAPH_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID2 << ENUM_ID_SHIFT |\ + ENCODER_OBJECT_ID_INTERNAL_SDVOA << OBJECT_ID_SHIFT) + +#define ENCODER_INTERNAL_SDVOB_ENUM_ID1 ( GRAPH_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + ENCODER_OBJECT_ID_INTERNAL_SDVOB << OBJECT_ID_SHIFT) + +#define ENCODER_SIL170B_ENUM_ID1 ( GRAPH_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + ENCODER_OBJECT_ID_SI170B << OBJECT_ID_SHIFT) + +#define ENCODER_CH7303_ENUM_ID1 ( GRAPH_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + ENCODER_OBJECT_ID_CH7303 << OBJECT_ID_SHIFT) + +#define ENCODER_CH7301_ENUM_ID1 ( GRAPH_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + ENCODER_OBJECT_ID_CH7301 << OBJECT_ID_SHIFT) + +#define ENCODER_INTERNAL_DVO1_ENUM_ID1 ( GRAPH_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + ENCODER_OBJECT_ID_INTERNAL_DVO1 << OBJECT_ID_SHIFT) + +#define ENCODER_EXTERNAL_SDVOA_ENUM_ID1 ( GRAPH_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + ENCODER_OBJECT_ID_EXTERNAL_SDVOA << OBJECT_ID_SHIFT) + +#define ENCODER_EXTERNAL_SDVOA_ENUM_ID2 ( GRAPH_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID2 << ENUM_ID_SHIFT |\ + ENCODER_OBJECT_ID_EXTERNAL_SDVOA << OBJECT_ID_SHIFT) + + +#define ENCODER_EXTERNAL_SDVOB_ENUM_ID1 ( GRAPH_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + 
ENCODER_OBJECT_ID_EXTERNAL_SDVOB << OBJECT_ID_SHIFT) + + +#define ENCODER_TITFP513_ENUM_ID1 ( GRAPH_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + ENCODER_OBJECT_ID_TITFP513 << OBJECT_ID_SHIFT) + +#define ENCODER_INTERNAL_LVTM1_ENUM_ID1 ( GRAPH_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + ENCODER_OBJECT_ID_INTERNAL_LVTM1 << OBJECT_ID_SHIFT) + +#define ENCODER_VT1623_ENUM_ID1 ( GRAPH_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + ENCODER_OBJECT_ID_VT1623 << OBJECT_ID_SHIFT) + +#define ENCODER_HDMI_SI1930_ENUM_ID1 ( GRAPH_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + ENCODER_OBJECT_ID_HDMI_SI1930 << OBJECT_ID_SHIFT) + +#define ENCODER_HDMI_INTERNAL_ENUM_ID1 ( GRAPH_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + ENCODER_OBJECT_ID_HDMI_INTERNAL << OBJECT_ID_SHIFT) + +#define ENCODER_INTERNAL_KLDSCP_TMDS1_ENUM_ID1 ( GRAPH_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1 << OBJECT_ID_SHIFT) + + +#define ENCODER_INTERNAL_KLDSCP_TMDS1_ENUM_ID2 ( GRAPH_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID2 << ENUM_ID_SHIFT |\ + ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1 << OBJECT_ID_SHIFT) + + +#define ENCODER_INTERNAL_KLDSCP_DVO1_ENUM_ID1 ( GRAPH_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1 << OBJECT_ID_SHIFT) + +#define ENCODER_INTERNAL_KLDSCP_DAC1_ENUM_ID1 ( GRAPH_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1 << OBJECT_ID_SHIFT) + +#define ENCODER_INTERNAL_KLDSCP_DAC2_ENUM_ID1 ( GRAPH_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + 
ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC2 << OBJECT_ID_SHIFT) // Shared with CV/TV and CRT + +#define ENCODER_SI178_ENUM_ID1 ( GRAPH_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + ENCODER_OBJECT_ID_SI178 << OBJECT_ID_SHIFT) + +#define ENCODER_MVPU_FPGA_ENUM_ID1 ( GRAPH_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + ENCODER_OBJECT_ID_MVPU_FPGA << OBJECT_ID_SHIFT) + +#define ENCODER_INTERNAL_DDI_ENUM_ID1 ( GRAPH_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + ENCODER_OBJECT_ID_INTERNAL_DDI << OBJECT_ID_SHIFT) + +#define ENCODER_VT1625_ENUM_ID1 ( GRAPH_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + ENCODER_OBJECT_ID_VT1625 << OBJECT_ID_SHIFT) + +#define ENCODER_HDMI_SI1932_ENUM_ID1 ( GRAPH_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + ENCODER_OBJECT_ID_HDMI_SI1932 << OBJECT_ID_SHIFT) + +#define ENCODER_DP_DP501_ENUM_ID1 ( GRAPH_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + ENCODER_OBJECT_ID_DP_DP501 << OBJECT_ID_SHIFT) + +#define ENCODER_DP_AN9801_ENUM_ID1 ( GRAPH_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + ENCODER_OBJECT_ID_DP_AN9801 << OBJECT_ID_SHIFT) + +#define ENCODER_INTERNAL_UNIPHY_ENUM_ID1 ( GRAPH_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + ENCODER_OBJECT_ID_INTERNAL_UNIPHY << OBJECT_ID_SHIFT) + +#define ENCODER_INTERNAL_UNIPHY_ENUM_ID2 ( GRAPH_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID2 << ENUM_ID_SHIFT |\ + ENCODER_OBJECT_ID_INTERNAL_UNIPHY << OBJECT_ID_SHIFT) + +#define ENCODER_INTERNAL_KLDSCP_LVTMA_ENUM_ID1 ( GRAPH_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA << OBJECT_ID_SHIFT) + 
+/****************************************************/ +/* Connector Object ID definition - Shared with BIOS */ +/****************************************************/ +/* +#define CONNECTOR_SINGLE_LINK_DVI_I_ENUM_ID1 0x3101 +#define CONNECTOR_DUAL_LINK_DVI_I_ENUM_ID1 0x3102 +#define CONNECTOR_SINGLE_LINK_DVI_D_ENUM_ID1 0x3103 +#define CONNECTOR_DUAL_LINK_DVI_D_ENUM_ID1 0x3104 +#define CONNECTOR_VGA_ENUM_ID1 0x3105 +#define CONNECTOR_COMPOSITE_ENUM_ID1 0x3106 +#define CONNECTOR_SVIDEO_ENUM_ID1 0x3107 +#define CONNECTOR_YPbPr_ENUM_ID1 0x3108 +#define CONNECTOR_D_CONNECTORE_ENUM_ID1 0x3109 +#define CONNECTOR_9PIN_DIN_ENUM_ID1 0x310A +#define CONNECTOR_SCART_ENUM_ID1 0x310B +#define CONNECTOR_HDMI_TYPE_A_ENUM_ID1 0x310C +#define CONNECTOR_HDMI_TYPE_B_ENUM_ID1 0x310D +#define CONNECTOR_LVDS_ENUM_ID1 0x310E +#define CONNECTOR_7PIN_DIN_ENUM_ID1 0x310F +#define CONNECTOR_PCIE_CONNECTOR_ENUM_ID1 0x3110 +*/ +#define CONNECTOR_LVDS_ENUM_ID1 ( GRAPH_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + CONNECTOR_OBJECT_ID_LVDS << OBJECT_ID_SHIFT) + +#define CONNECTOR_SINGLE_LINK_DVI_I_ENUM_ID1 ( GRAPH_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + CONNECTOR_OBJECT_ID_SINGLE_LINK_DVI_I << OBJECT_ID_SHIFT) + +#define CONNECTOR_SINGLE_LINK_DVI_I_ENUM_ID2 ( GRAPH_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID2 << ENUM_ID_SHIFT |\ + CONNECTOR_OBJECT_ID_SINGLE_LINK_DVI_I << OBJECT_ID_SHIFT) + +#define CONNECTOR_DUAL_LINK_DVI_I_ENUM_ID1 ( GRAPH_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + CONNECTOR_OBJECT_ID_DUAL_LINK_DVI_I << OBJECT_ID_SHIFT) + +#define CONNECTOR_DUAL_LINK_DVI_I_ENUM_ID2 ( GRAPH_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID2 << ENUM_ID_SHIFT |\ + CONNECTOR_OBJECT_ID_DUAL_LINK_DVI_I << OBJECT_ID_SHIFT) + +#define CONNECTOR_SINGLE_LINK_DVI_D_ENUM_ID1 ( GRAPH_OBJECT_TYPE_CONNECTOR << 
OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + CONNECTOR_OBJECT_ID_SINGLE_LINK_DVI_D << OBJECT_ID_SHIFT) + +#define CONNECTOR_SINGLE_LINK_DVI_D_ENUM_ID2 ( GRAPH_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID2 << ENUM_ID_SHIFT |\ + CONNECTOR_OBJECT_ID_SINGLE_LINK_DVI_D << OBJECT_ID_SHIFT) + +#define CONNECTOR_DUAL_LINK_DVI_D_ENUM_ID1 ( GRAPH_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + CONNECTOR_OBJECT_ID_DUAL_LINK_DVI_D << OBJECT_ID_SHIFT) + +#define CONNECTOR_VGA_ENUM_ID1 ( GRAPH_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + CONNECTOR_OBJECT_ID_VGA << OBJECT_ID_SHIFT) + +#define CONNECTOR_VGA_ENUM_ID2 ( GRAPH_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID2 << ENUM_ID_SHIFT |\ + CONNECTOR_OBJECT_ID_VGA << OBJECT_ID_SHIFT) + +#define CONNECTOR_COMPOSITE_ENUM_ID1 ( GRAPH_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + CONNECTOR_OBJECT_ID_COMPOSITE << OBJECT_ID_SHIFT) + +#define CONNECTOR_SVIDEO_ENUM_ID1 ( GRAPH_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + CONNECTOR_OBJECT_ID_SVIDEO << OBJECT_ID_SHIFT) + +#define CONNECTOR_YPbPr_ENUM_ID1 ( GRAPH_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + CONNECTOR_OBJECT_ID_YPbPr << OBJECT_ID_SHIFT) + +#define CONNECTOR_D_CONNECTOR_ENUM_ID1 ( GRAPH_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + CONNECTOR_OBJECT_ID_D_CONNECTOR << OBJECT_ID_SHIFT) + +#define CONNECTOR_9PIN_DIN_ENUM_ID1 ( GRAPH_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + CONNECTOR_OBJECT_ID_9PIN_DIN << OBJECT_ID_SHIFT) + +#define CONNECTOR_SCART_ENUM_ID1 ( GRAPH_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + CONNECTOR_OBJECT_ID_SCART << 
OBJECT_ID_SHIFT) + +#define CONNECTOR_HDMI_TYPE_A_ENUM_ID1 ( GRAPH_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + CONNECTOR_OBJECT_ID_HDMI_TYPE_A << OBJECT_ID_SHIFT) + +#define CONNECTOR_HDMI_TYPE_B_ENUM_ID1 ( GRAPH_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + CONNECTOR_OBJECT_ID_HDMI_TYPE_B << OBJECT_ID_SHIFT) + +#define CONNECTOR_7PIN_DIN_ENUM_ID1 ( GRAPH_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + CONNECTOR_OBJECT_ID_7PIN_DIN << OBJECT_ID_SHIFT) + +#define CONNECTOR_PCIE_CONNECTOR_ENUM_ID1 ( GRAPH_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + CONNECTOR_OBJECT_ID_PCIE_CONNECTOR << OBJECT_ID_SHIFT) + +#define CONNECTOR_PCIE_CONNECTOR_ENUM_ID2 ( GRAPH_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID2 << ENUM_ID_SHIFT |\ + CONNECTOR_OBJECT_ID_PCIE_CONNECTOR << OBJECT_ID_SHIFT) + +#define CONNECTOR_CROSSFIRE_ENUM_ID1 ( GRAPH_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + CONNECTOR_OBJECT_ID_CROSSFIRE << OBJECT_ID_SHIFT) + +#define CONNECTOR_CROSSFIRE_ENUM_ID2 ( GRAPH_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID2 << ENUM_ID_SHIFT |\ + CONNECTOR_OBJECT_ID_CROSSFIRE << OBJECT_ID_SHIFT) + + +#define CONNECTOR_HARDCODE_DVI_ENUM_ID1 ( GRAPH_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + CONNECTOR_OBJECT_ID_HARDCODE_DVI << OBJECT_ID_SHIFT) + +#define CONNECTOR_HARDCODE_DVI_ENUM_ID2 ( GRAPH_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID2 << ENUM_ID_SHIFT |\ + CONNECTOR_OBJECT_ID_HARDCODE_DVI << OBJECT_ID_SHIFT) + +#define CONNECTOR_DISPLAYPORT_ENUM_ID1 ( GRAPH_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + CONNECTOR_OBJECT_ID_DISPLAYPORT << OBJECT_ID_SHIFT) + +#define 
CONNECTOR_DISPLAYPORT_ENUM_ID2 ( GRAPH_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID2 << ENUM_ID_SHIFT |\ + CONNECTOR_OBJECT_ID_DISPLAYPORT << OBJECT_ID_SHIFT) + +/****************************************************/ +/* Router Object ID definition - Shared with BIOS */ +/****************************************************/ +#define ROUTER_I2C_EXTENDER_CNTL_ENUM_ID1 ( GRAPH_OBJECT_TYPE_ROUTER << OBJECT_TYPE_SHIFT |\ + GRAPH_OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\ + ROUTER_OBJECT_ID_I2C_EXTENDER_CNTL << OBJECT_ID_SHIFT) + +/* deleted */ + +/****************************************************/ +/* Object Cap definition - Shared with BIOS */ +/****************************************************/ +#define GRAPHICS_OBJECT_CAP_I2C 0x00000001L +#define GRAPHICS_OBJECT_CAP_TABLE_ID 0x00000002L + + +#define GRAPHICS_OBJECT_I2CCOMMAND_TABLE_ID 0x01 +#define GRAPHICS_OBJECT_HOTPLUGDETECTIONINTERUPT_TABLE_ID 0x02 +#define GRAPHICS_OBJECT_ENCODER_OUTPUT_PROTECTION_TABLE_ID 0x03 + +#if defined(_X86_) +#pragma pack() +#endif + +#endif /*GRAPHICTYPE */ + + + + diff --git a/drivers/gpu/drm/radeon/atom-bits.h b/drivers/gpu/drm/radeon/atom-bits.h new file mode 100644 index 0000000..f94d2e2 --- /dev/null +++ b/drivers/gpu/drm/radeon/atom-bits.h @@ -0,0 +1,48 @@ +/* + * Copyright 2008 Advanced Micro Devices, Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. 
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Author: Stanislaw Skowronek
+ */
+
+#ifndef ATOM_BITS_H
+#define ATOM_BITS_H
+
+static inline uint8_t get_u8(void *bios, int ptr)
+{
+	return ((unsigned char *)bios)[ptr];
+}
+#define U8(ptr) get_u8(ctx->ctx->bios,(ptr))
+#define CU8(ptr) get_u8(ctx->bios,(ptr))
+static inline uint16_t get_u16(void *bios, int ptr)
+{
+	return get_u8(bios,ptr)|(((uint16_t)get_u8(bios,ptr+1))<<8);
+}
+#define U16(ptr) get_u16(ctx->ctx->bios,(ptr))
+#define CU16(ptr) get_u16(ctx->bios,(ptr))
+static inline uint32_t get_u32(void *bios, int ptr)
+{
+	return get_u16(bios,ptr)|(((uint32_t)get_u16(bios,ptr+2))<<16);
+}
+#define U32(ptr) get_u32(ctx->ctx->bios,(ptr))
+#define CU32(ptr) get_u32(ctx->bios,(ptr))
+#define CSTR(ptr) (((char *)(ctx->bios))+(ptr))
+
+#endif

diff --git a/drivers/gpu/drm/radeon/atom-names.h b/drivers/gpu/drm/radeon/atom-names.h
new file mode 100644
index 0000000..2cdc170
--- /dev/null
+++ b/drivers/gpu/drm/radeon/atom-names.h
@@ -0,0 +1,100 @@
+/*
+ * Copyright 2008 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Author: Stanislaw Skowronek
+ */
+
+#ifndef ATOM_NAMES_H
+#define ATOM_NAMES_H
+
+#include "atom.h"
+
+#ifdef ATOM_DEBUG
+
+#define ATOM_OP_NAMES_CNT 123
+static char *atom_op_names[ATOM_OP_NAMES_CNT]={
+"RESERVED", "MOVE_REG", "MOVE_PS", "MOVE_WS", "MOVE_FB", "MOVE_PLL",
+"MOVE_MC", "AND_REG", "AND_PS", "AND_WS", "AND_FB", "AND_PLL", "AND_MC",
+"OR_REG", "OR_PS", "OR_WS", "OR_FB", "OR_PLL", "OR_MC", "SHIFT_LEFT_REG",
+"SHIFT_LEFT_PS", "SHIFT_LEFT_WS", "SHIFT_LEFT_FB", "SHIFT_LEFT_PLL",
+"SHIFT_LEFT_MC", "SHIFT_RIGHT_REG", "SHIFT_RIGHT_PS", "SHIFT_RIGHT_WS",
+"SHIFT_RIGHT_FB", "SHIFT_RIGHT_PLL", "SHIFT_RIGHT_MC", "MUL_REG",
+"MUL_PS", "MUL_WS", "MUL_FB", "MUL_PLL", "MUL_MC", "DIV_REG", "DIV_PS",
+"DIV_WS", "DIV_FB", "DIV_PLL", "DIV_MC", "ADD_REG", "ADD_PS", "ADD_WS",
+"ADD_FB", "ADD_PLL", "ADD_MC", "SUB_REG", "SUB_PS", "SUB_WS", "SUB_FB",
+"SUB_PLL", "SUB_MC", "SET_ATI_PORT", "SET_PCI_PORT", "SET_SYS_IO_PORT",
+"SET_REG_BLOCK", "SET_FB_BASE", "COMPARE_REG", "COMPARE_PS",
+"COMPARE_WS", "COMPARE_FB", "COMPARE_PLL", "COMPARE_MC", "SWITCH",
+"JUMP", "JUMP_EQUAL", "JUMP_BELOW", "JUMP_ABOVE", "JUMP_BELOW_OR_EQUAL",
+"JUMP_ABOVE_OR_EQUAL", "JUMP_NOT_EQUAL", "TEST_REG", "TEST_PS", "TEST_WS",
+"TEST_FB", "TEST_PLL", "TEST_MC", "DELAY_MILLISEC", "DELAY_MICROSEC",
+"CALL_TABLE", "REPEAT", "CLEAR_REG", "CLEAR_PS", "CLEAR_WS", "CLEAR_FB",
+"CLEAR_PLL", "CLEAR_MC", "NOP", "EOT", "MASK_REG", "MASK_PS", "MASK_WS",
+"MASK_FB", "MASK_PLL", "MASK_MC", "POST_CARD", "BEEP", "SAVE_REG",
+"RESTORE_REG", "SET_DATA_BLOCK", "XOR_REG", "XOR_PS", "XOR_WS", "XOR_FB",
+"XOR_PLL", "XOR_MC", "SHL_REG", "SHL_PS", "SHL_WS", "SHL_FB", "SHL_PLL",
+"SHL_MC", "SHR_REG", "SHR_PS", "SHR_WS", "SHR_FB", "SHR_PLL", "SHR_MC",
+"DEBUG", "CTB_DS",
+};
+
+#define ATOM_TABLE_NAMES_CNT 74
+static char *atom_table_names[ATOM_TABLE_NAMES_CNT]={
+"ASIC_Init", "GetDisplaySurfaceSize", "ASIC_RegistersInit",
+"VRAM_BlockVenderDetection", "SetClocksRatio", "MemoryControllerInit",
+"GPIO_PinInit", "MemoryParamAdjust", "DVOEncoderControl",
+"GPIOPinControl", "SetEngineClock", "SetMemoryClock", "SetPixelClock",
+"DynamicClockGating", "ResetMemoryDLL", "ResetMemoryDevice",
+"MemoryPLLInit", "EnableMemorySelfRefresh", "AdjustMemoryController",
+"EnableASIC_StaticPwrMgt", "ASIC_StaticPwrMgtStatusChange",
+"DAC_LoadDetection", "TMDS2EncoderControl", "LCD1OutputControl",
+"DAC1EncoderControl", "DAC2EncoderControl", "DVOOutputControl",
+"CV1OutputControl", "SetCRTC_DPM_State", "TVEncoderControl",
+"TMDS1EncoderControl", "LVDSEncoderControl", "TV1OutputControl",
+"EnableScaler", "BlankCRTC", "EnableCRTC", "GetPixelClock",
+"EnableVGA_Render", "EnableVGA_Access", "SetCRTC_Timing",
+"SetCRTC_OverScan", "SetCRTC_Replication", "SelectCRTC_Source",
+"EnableGraphSurfaces", "UpdateCRTC_DoubleBufferRegisters",
+"LUT_AutoFill", "EnableHW_IconCursor", "GetMemoryClock",
+"GetEngineClock", "SetCRTC_UsingDTDTiming", "TVBootUpStdPinDetection",
+"DFP2OutputControl", "VRAM_BlockDetectionByStrap", "MemoryCleanUp",
+"ReadEDIDFromHWAssistedI2C", "WriteOneByteToHWAssistedI2C",
+"ReadHWAssistedI2CStatus", "SpeedFanControl", "PowerConnectorDetection",
+"MC_Synchronization", "ComputeMemoryEnginePLL", "MemoryRefreshConversion",
+"VRAM_GetCurrentInfoBlock", "DynamicMemorySettings", "MemoryTraining",
+"EnableLVDS_SS", "DFP1OutputControl", "SetVoltage", "CRT1OutputControl",
+"CRT2OutputControl", "SetupHWAssistedI2CStatus", "ClockSource",
+"MemoryDeviceInit", "EnableYUV",
+};
+
+#define ATOM_IO_NAMES_CNT 5
+static char *atom_io_names[ATOM_IO_NAMES_CNT]={
+"MM", "PLL", "MC", "PCIE", "PCIE PORT",
+};
+
+#else
+
+#define ATOM_OP_NAMES_CNT 0
+#define ATOM_TABLE_NAMES_CNT 0
+#define ATOM_IO_NAMES_CNT 0
+
+#endif
+
+#endif

diff --git a/drivers/gpu/drm/radeon/atom-types.h b/drivers/gpu/drm/radeon/atom-types.h
new file mode 100644
index 0000000..1125b86
--- /dev/null
+++ b/drivers/gpu/drm/radeon/atom-types.h
@@ -0,0 +1,42 @@
+/*
+ * Copyright 2008 Red Hat Inc.
+ * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + * + * Author: Dave Airlie + */ + +#ifndef ATOM_TYPES_H +#define ATOM_TYPES_H + +/* sync atom types to kernel types */ + +typedef uint16_t USHORT; +typedef uint32_t ULONG; +typedef uint8_t UCHAR; + + +#ifndef ATOM_BIG_ENDIAN +#if defined(__BIG_ENDIAN) +#define ATOM_BIG_ENDIAN 1 +#else +#define ATOM_BIG_ENDIAN 0 +#endif +#endif +#endif diff --git a/drivers/gpu/drm/radeon/atom.c b/drivers/gpu/drm/radeon/atom.c new file mode 100644 index 0000000..2a660a4 --- /dev/null +++ b/drivers/gpu/drm/radeon/atom.c @@ -0,0 +1,1143 @@ +/* + * Copyright 2008 Advanced Micro Devices, Inc. 
+ * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ * + * Author: Stanislaw Skowronek + */ + +#include <linux/module.h> +#include <linux/sched.h> + +#define ATOM_DEBUG + +#include "atom.h" +#include "atom-names.h" +#include "atom-bits.h" + +#define ATOM_COND_ABOVE 0 +#define ATOM_COND_ABOVEOREQUAL 1 +#define ATOM_COND_ALWAYS 2 +#define ATOM_COND_BELOW 3 +#define ATOM_COND_BELOWOREQUAL 4 +#define ATOM_COND_EQUAL 5 +#define ATOM_COND_NOTEQUAL 6 + +#define ATOM_PORT_ATI 0 +#define ATOM_PORT_PCI 1 +#define ATOM_PORT_SYSIO 2 + +#define ATOM_UNIT_MICROSEC 0 +#define ATOM_UNIT_MILLISEC 1 + +#define PLL_INDEX 2 +#define PLL_DATA 3 + +typedef struct { + struct atom_context *ctx; + + uint32_t *ps, *ws; + int ps_shift; + uint16_t start; +} atom_exec_context; + +int atom_debug = 0; +void atom_execute_table(struct atom_context *ctx, int index, uint32_t *params); + +static uint32_t atom_arg_mask[8] = {0xFFFFFFFF, 0xFFFF, 0xFFFF00, 0xFFFF0000, 0xFF, 0xFF00, 0xFF0000, 0xFF000000}; +static int atom_arg_shift[8] = {0, 0, 8, 16, 0, 8, 16, 24}; +static int atom_dst_to_src[8][4] = { // translate destination alignment field to the source alignment encoding + { 0, 0, 0, 0 }, + { 1, 2, 3, 0 }, + { 1, 2, 3, 0 }, + { 1, 2, 3, 0 }, + { 4, 5, 6, 7 }, + { 4, 5, 6, 7 }, + { 4, 5, 6, 7 }, + { 4, 5, 6, 7 }, +}; +static int atom_def_dst[8] = { 0, 0, 1, 2, 0, 1, 2, 3 }; + +static int debug_depth = 0; +#ifdef ATOM_DEBUG +static void debug_print_spaces(int n) +{ + while(n--) + printk(" "); +} +#define DEBUG(...) do if(atom_debug) { printk(KERN_DEBUG __VA_ARGS__); } while(0) +#define SDEBUG(...) do if(atom_debug) { printk(KERN_DEBUG); debug_print_spaces(debug_depth); printk(__VA_ARGS__); } while(0) +#else +#define DEBUG(...) do { } while(0) +#define SDEBUG(...) 
do { } while(0) +#endif + +static uint32_t atom_iio_execute(struct atom_context *ctx, int base, uint32_t index, uint32_t data) +{ + uint32_t temp = 0xCDCDCDCD; + while(1) + switch(CU8(base)) { + case ATOM_IIO_NOP: + base++; + break; + case ATOM_IIO_READ: + temp = ctx->card->reg_read(ctx->card, CU16(base+1)); + base+=3; + break; + case ATOM_IIO_WRITE: + ctx->card->reg_write(ctx->card, CU16(base+1), temp); + base+=3; + break; + case ATOM_IIO_CLEAR: + temp &= ~((0xFFFFFFFF >> (32-CU8(base+1))) << CU8(base+2)); + base+=3; + break; + case ATOM_IIO_SET: + temp |= (0xFFFFFFFF >> (32-CU8(base+1))) << CU8(base+2); + base+=3; + break; + case ATOM_IIO_MOVE_INDEX: + temp &= ~((0xFFFFFFFF >> (32-CU8(base+1))) << CU8(base+2)); + temp |= ((index >> CU8(base+2)) & (0xFFFFFFFF >> (32-CU8(base+1)))) << CU8(base+3); + base+=4; + break; + case ATOM_IIO_MOVE_DATA: + temp &= ~((0xFFFFFFFF >> (32-CU8(base+1))) << CU8(base+2)); + temp |= ((data >> CU8(base+2)) & (0xFFFFFFFF >> (32-CU8(base+1)))) << CU8(base+3); + base+=4; + break; + case ATOM_IIO_MOVE_ATTR: + temp &= ~((0xFFFFFFFF >> (32-CU8(base+1))) << CU8(base+2)); + temp |= ((ctx->io_attr >> CU8(base+2)) & (0xFFFFFFFF >> (32-CU8(base+1)))) << CU8(base+3); + base+=4; + break; + case ATOM_IIO_END: + return temp; + default: + printk(KERN_INFO "Unknown IIO opcode.\n"); + return 0; + } +} + +static uint32_t atom_get_src_int(atom_exec_context *ctx, uint8_t attr, int *ptr, uint32_t *saved, int print) +{ + uint32_t idx, val = 0xCDCDCDCD, align, arg; + struct atom_context *gctx = ctx->ctx; + arg = attr & 7; + align = (attr >> 3) & 7; + switch(arg) { + case ATOM_ARG_REG: + idx = U16(*ptr); + (*ptr)+=2; + if(print) + DEBUG("REG[0x%04X]", idx); + idx += gctx->reg_block; + switch(gctx->io_mode) { + case ATOM_IO_MM: + val = gctx->card->reg_read(gctx->card, idx); + break; + case ATOM_IO_PCI: + printk(KERN_INFO "PCI registers are not implemented.\n"); + return 0; + case ATOM_IO_SYSIO: + printk(KERN_INFO "SYSIO registers are not implemented.\n"); + 
return 0; + default: + if(!(gctx->io_mode&0x80)) { + printk(KERN_INFO "Bad IO mode.\n"); + return 0; + } + if(!gctx->iio[gctx->io_mode&0x7F]) { + printk(KERN_INFO "Undefined indirect IO read method %d.\n", gctx->io_mode&0x7F); + return 0; + } + val = atom_iio_execute(gctx, gctx->iio[gctx->io_mode&0x7F], idx, 0); + } + break; + case ATOM_ARG_PS: + idx = U8(*ptr); + (*ptr)++; + val = ctx->ps[idx]; + if(print) + DEBUG("PS[0x%02X,0x%04X]", idx, val); + break; + case ATOM_ARG_WS: + idx = U8(*ptr); + (*ptr)++; + if(print) + DEBUG("WS[0x%02X]", idx); + switch(idx) { + case ATOM_WS_QUOTIENT: + val = gctx->divmul[0]; + break; + case ATOM_WS_REMAINDER: + val = gctx->divmul[1]; + break; + case ATOM_WS_DATAPTR: + val = gctx->data_block; + break; + case ATOM_WS_SHIFT: + val = gctx->shift; + break; + case ATOM_WS_OR_MASK: + val = 1<<gctx->shift; + break; + case ATOM_WS_AND_MASK: + val = ~(1<<gctx->shift); + break; + case ATOM_WS_FB_WINDOW: + val = gctx->fb_base; + break; + case ATOM_WS_ATTRIBUTES: + val = gctx->io_attr; + break; + default: + val = ctx->ws[idx]; + } + break; + case ATOM_ARG_ID: + idx = U16(*ptr); + (*ptr)+=2; + if(print) { + if(gctx->data_block) + DEBUG("ID[0x%04X+%04X]", idx, gctx->data_block); + else + DEBUG("ID[0x%04X]", idx); + } + val = U32(idx + gctx->data_block); + break; + case ATOM_ARG_FB: + idx = U8(*ptr); + (*ptr)++; + if(print) + DEBUG("FB[0x%02X]", idx); + printk(KERN_INFO "FB access is not implemented.\n"); + return 0; + case ATOM_ARG_IMM: + switch(align) { + case ATOM_SRC_DWORD: + val = U32(*ptr); + (*ptr)+=4; + if(print) + DEBUG("IMM 0x%08X\n", val); + return val; + case ATOM_SRC_WORD0: + case ATOM_SRC_WORD8: + case ATOM_SRC_WORD16: + val = U16(*ptr); + (*ptr)+=2; + if(print) + DEBUG("IMM 0x%04X\n", val); + return val; + case ATOM_SRC_BYTE0: + case ATOM_SRC_BYTE8: + case ATOM_SRC_BYTE16: + case ATOM_SRC_BYTE24: + val = U8(*ptr); + (*ptr)++; + if(print) + DEBUG("IMM 0x%02X\n", val); + return val; + } + return 0; + case ATOM_ARG_PLL: + idx = 
U8(*ptr); + (*ptr)++; + if(print) + DEBUG("PLL[0x%02X]", idx); + gctx->card->reg_write(gctx->card, PLL_INDEX, idx); + val = gctx->card->reg_read(gctx->card, PLL_DATA); + break; + case ATOM_ARG_MC: + idx = U8(*ptr); + (*ptr)++; + if(print) + DEBUG("MC[0x%02X]", idx); + val = gctx->card->mc_read(gctx->card, idx); + return 0; + } + if(saved) + *saved = val; + val &= atom_arg_mask[align]; + val >>= atom_arg_shift[align]; + if(print) + switch(align) { + case ATOM_SRC_DWORD: + DEBUG(".[31:0] -> 0x%08X\n", val); + break; + case ATOM_SRC_WORD0: + DEBUG(".[15:0] -> 0x%04X\n", val); + break; + case ATOM_SRC_WORD8: + DEBUG(".[23:8] -> 0x%04X\n", val); + break; + case ATOM_SRC_WORD16: + DEBUG(".[31:16] -> 0x%04X\n", val); + break; + case ATOM_SRC_BYTE0: + DEBUG(".[7:0] -> 0x%02X\n", val); + break; + case ATOM_SRC_BYTE8: + DEBUG(".[15:8] -> 0x%02X\n", val); + break; + case ATOM_SRC_BYTE16: + DEBUG(".[23:16] -> 0x%02X\n", val); + break; + case ATOM_SRC_BYTE24: + DEBUG(".[31:24] -> 0x%02X\n", val); + break; + } + return val; +} + +static void atom_skip_src_int(atom_exec_context *ctx, uint8_t attr, int *ptr) +{ + uint32_t align = (attr >> 3) & 7, arg = attr & 7; + switch(arg) { + case ATOM_ARG_REG: + case ATOM_ARG_ID: + (*ptr)+=2; + break; + case ATOM_ARG_PLL: + case ATOM_ARG_MC: + case ATOM_ARG_PS: + case ATOM_ARG_WS: + case ATOM_ARG_FB: + (*ptr)++; + break; + case ATOM_ARG_IMM: + switch(align) { + case ATOM_SRC_DWORD: + (*ptr)+=4; + return; + case ATOM_SRC_WORD0: + case ATOM_SRC_WORD8: + case ATOM_SRC_WORD16: + (*ptr)+=2; + return; + case ATOM_SRC_BYTE0: + case ATOM_SRC_BYTE8: + case ATOM_SRC_BYTE16: + case ATOM_SRC_BYTE24: + (*ptr)++; + return; + } + return; + } +} + +static uint32_t atom_get_src(atom_exec_context *ctx, uint8_t attr, int *ptr) +{ + return atom_get_src_int(ctx, attr, ptr, NULL, 1); +} + +static uint32_t atom_get_dst(atom_exec_context *ctx, int arg, uint8_t attr, int *ptr, uint32_t *saved, int print) +{ + return atom_get_src_int(ctx, 
arg|atom_dst_to_src[(attr>>3)&7][(attr>>6)&3]<<3, ptr, saved, print); +} + +static void atom_skip_dst(atom_exec_context *ctx, int arg, uint8_t attr, int *ptr) +{ + atom_skip_src_int(ctx, arg|atom_dst_to_src[(attr>>3)&7][(attr>>6)&3]<<3, ptr); +} + +static void atom_put_dst(atom_exec_context *ctx, int arg, uint8_t attr, int *ptr, uint32_t val, uint32_t saved) +{ + uint32_t align = atom_dst_to_src[(attr>>3)&7][(attr>>6)&3], old_val = val, idx; + struct atom_context *gctx = ctx->ctx; + old_val &= atom_arg_mask[align] >> atom_arg_shift[align]; + val <<= atom_arg_shift[align]; + val &= atom_arg_mask[align]; + saved &= ~atom_arg_mask[align]; + val |= saved; + switch(arg) { + case ATOM_ARG_REG: + idx = U16(*ptr); + (*ptr)+=2; + DEBUG("REG[0x%04X]", idx); + idx += gctx->reg_block; + switch(gctx->io_mode) { + case ATOM_IO_MM: + if(idx == 0) + gctx->card->reg_write(gctx->card, idx, val<<2); + else + gctx->card->reg_write(gctx->card, idx, val); + break; + case ATOM_IO_PCI: + printk(KERN_INFO "PCI registers are not implemented.\n"); + return; + case ATOM_IO_SYSIO: + printk(KERN_INFO "SYSIO registers are not implemented.\n"); + return; + default: + if(!(gctx->io_mode&0x80)) { + printk(KERN_INFO "Bad IO mode.\n"); + return; + } + if(!gctx->iio[gctx->io_mode&0xFF]) { + printk(KERN_INFO "Undefined indirect IO write method %d.\n", gctx->io_mode&0x7F); + return; + } + atom_iio_execute(gctx, gctx->iio[gctx->io_mode&0xFF], idx, val); + } + break; + case ATOM_ARG_PS: + idx = U8(*ptr); + (*ptr)++; + DEBUG("PS[0x%02X]", idx); + ctx->ps[idx] = val; + break; + case ATOM_ARG_WS: + idx = U8(*ptr); + (*ptr)++; + DEBUG("WS[0x%02X]", idx); + switch(idx) { + case ATOM_WS_QUOTIENT: + gctx->divmul[0] = val; + break; + case ATOM_WS_REMAINDER: + gctx->divmul[1] = val; + break; + case ATOM_WS_DATAPTR: + gctx->data_block = val; + break; + case ATOM_WS_SHIFT: + gctx->shift = val; + break; + case ATOM_WS_OR_MASK: + case ATOM_WS_AND_MASK: + break; + case ATOM_WS_FB_WINDOW: + gctx->fb_base = val; + break; 
+ case ATOM_WS_ATTRIBUTES: + gctx->io_attr = val; + break; + default: + ctx->ws[idx] = val; + } + break; + case ATOM_ARG_FB: + idx = U8(*ptr); + (*ptr)++; + DEBUG("FB[0x%02X]", idx); + printk(KERN_INFO "FB access is not implemented.\n"); + return; + case ATOM_ARG_PLL: + idx = U8(*ptr); + (*ptr)++; + DEBUG("PLL[0x%02X]", idx); + gctx->card->reg_write(gctx->card, PLL_INDEX, idx); + gctx->card->reg_write(gctx->card, PLL_DATA, val); + break; + case ATOM_ARG_MC: + idx = U8(*ptr); + (*ptr)++; + DEBUG("MC[0x%02X]", idx); + gctx->card->mc_write(gctx->card, idx, val); + return; + } + switch(align) { + case ATOM_SRC_DWORD: + DEBUG(".[31:0] <- 0x%08X\n", old_val); + break; + case ATOM_SRC_WORD0: + DEBUG(".[15:0] <- 0x%04X\n", old_val); + break; + case ATOM_SRC_WORD8: + DEBUG(".[23:8] <- 0x%04X\n", old_val); + break; + case ATOM_SRC_WORD16: + DEBUG(".[31:16] <- 0x%04X\n", old_val); + break; + case ATOM_SRC_BYTE0: + DEBUG(".[7:0] <- 0x%02X\n", old_val); + break; + case ATOM_SRC_BYTE8: + DEBUG(".[15:8] <- 0x%02X\n", old_val); + break; + case ATOM_SRC_BYTE16: + DEBUG(".[23:16] <- 0x%02X\n", old_val); + break; + case ATOM_SRC_BYTE24: + DEBUG(".[31:24] <- 0x%02X\n", old_val); + break; + } +} + +static void atom_op_add(atom_exec_context *ctx, int *ptr, int arg) +{ + uint8_t attr = U8((*ptr)++); + uint32_t dst, src, saved; + int dptr = *ptr; + SDEBUG(" dst: "); + dst = atom_get_dst(ctx, arg, attr, ptr, &saved, 1); + SDEBUG(" src: "); + src = atom_get_src(ctx, attr, ptr); + dst += src; + SDEBUG(" dst: "); + atom_put_dst(ctx, arg, attr, &dptr, dst, saved); +} + +static void atom_op_and(atom_exec_context *ctx, int *ptr, int arg) +{ + uint8_t attr = U8((*ptr)++); + uint32_t dst, src, saved; + int dptr = *ptr; + SDEBUG(" dst: "); + dst = atom_get_dst(ctx, arg, attr, ptr, &saved, 1); + SDEBUG(" src: "); + src = atom_get_src(ctx, attr, ptr); + dst &= src; + SDEBUG(" dst: "); + atom_put_dst(ctx, arg, attr, &dptr, dst, saved); +} + +static void atom_op_beep(atom_exec_context *ctx, int *ptr, 
int arg) +{ + printk("ATOM BIOS beeped!\n"); +} + +static void atom_op_calltable(atom_exec_context *ctx, int *ptr, int arg) +{ + int idx = U8((*ptr)++); + if(idx < ATOM_TABLE_NAMES_CNT) + SDEBUG(" table: %d (%s)\n", idx, atom_table_names[idx]); + else + SDEBUG(" table: %d\n", idx); + if(U16(ctx->ctx->cmd_table + 4 + 2*idx)) + atom_execute_table(ctx->ctx, idx, ctx->ps+ctx->ps_shift); +} + +static void atom_op_clear(atom_exec_context *ctx, int *ptr, int arg) +{ + uint8_t attr = U8((*ptr)++); + uint32_t saved; + int dptr = *ptr; + attr &= 0x38; + attr |= atom_def_dst[attr>>3]<<6; + atom_get_dst(ctx, arg, attr, ptr, &saved, 0); + SDEBUG(" dst: "); + atom_put_dst(ctx, arg, attr, &dptr, 0, saved); +} + +static void atom_op_compare(atom_exec_context *ctx, int *ptr, int arg) +{ + uint8_t attr = U8((*ptr)++); + uint32_t dst, src; + SDEBUG(" src1: "); + dst = atom_get_dst(ctx, arg, attr, ptr, NULL, 1); + SDEBUG(" src2: "); + src = atom_get_src(ctx, attr, ptr); + ctx->ctx->cs_equal = (dst == src); + ctx->ctx->cs_above = (dst > src); + SDEBUG(" result: %s %s\n", ctx->ctx->cs_equal?"EQ":"NE", ctx->ctx->cs_above?"GT":"LE"); +} + +static void atom_op_delay(atom_exec_context *ctx, int *ptr, int arg) +{ + uint8_t count = U8((*ptr)++); + SDEBUG(" count: %d\n", count); + if(arg == ATOM_UNIT_MICROSEC) + schedule_timeout_uninterruptible(usecs_to_jiffies(count)); + else + schedule_timeout_uninterruptible(msecs_to_jiffies(count)); +} + +static void atom_op_div(atom_exec_context *ctx, int *ptr, int arg) +{ + uint8_t attr = U8((*ptr)++); + uint32_t dst, src; + SDEBUG(" src1: "); + dst = atom_get_dst(ctx, arg, attr, ptr, NULL, 1); + SDEBUG(" src2: "); + src = atom_get_src(ctx, attr, ptr); + if(src != 0) { + ctx->ctx->divmul[0] = dst/src; + ctx->ctx->divmul[1] = dst%src; + } else { + ctx->ctx->divmul[0] = 0; + ctx->ctx->divmul[1] = 0; + } +} + +static void atom_op_eot(atom_exec_context *ctx, int *ptr, int arg) +{ + /* functionally, a nop */ +} + +static void atom_op_jump(atom_exec_context 
*ctx, int *ptr, int arg) +{ + int execute = 0, target = U16(*ptr); + (*ptr)+=2; + switch(arg) { + case ATOM_COND_ABOVE: + execute = ctx->ctx->cs_above; + break; + case ATOM_COND_ABOVEOREQUAL: + execute = ctx->ctx->cs_above || ctx->ctx->cs_equal; + break; + case ATOM_COND_ALWAYS: + execute = 1; + break; + case ATOM_COND_BELOW: + execute = !(ctx->ctx->cs_above || ctx->ctx->cs_equal); + break; + case ATOM_COND_BELOWOREQUAL: + execute = !ctx->ctx->cs_above; + break; + case ATOM_COND_EQUAL: + execute = ctx->ctx->cs_equal; + break; + case ATOM_COND_NOTEQUAL: + execute = !ctx->ctx->cs_equal; + break; + } + if(arg != ATOM_COND_ALWAYS) + SDEBUG(" taken: %s\n", execute?"yes":"no"); + SDEBUG(" target: 0x%04X\n", target); + if(execute) + *ptr = ctx->start+target; +} + +static void atom_op_mask(atom_exec_context *ctx, int *ptr, int arg) +{ + uint8_t attr = U8((*ptr)++); + uint32_t dst, src1, src2, saved; + int dptr = *ptr; + SDEBUG(" dst: "); + dst = atom_get_dst(ctx, arg, attr, ptr, &saved, 1); + SDEBUG(" src1: "); + src1 = atom_get_src(ctx, attr, ptr); + SDEBUG(" src2: "); + src2 = atom_get_src(ctx, attr, ptr); + dst &= src1; + dst |= src2; + SDEBUG(" dst: "); + atom_put_dst(ctx, arg, attr, &dptr, dst, saved); +} + +static void atom_op_move(atom_exec_context *ctx, int *ptr, int arg) +{ + uint8_t attr = U8((*ptr)++); + uint32_t src, saved; + int dptr = *ptr; + if(((attr>>3)&7) != ATOM_SRC_DWORD) + atom_get_dst(ctx, arg, attr, ptr, &saved, 0); + else { + atom_skip_dst(ctx, arg, attr, ptr); + saved = 0xCDCDCDCD; + } + SDEBUG(" src: "); + src = atom_get_src(ctx, attr, ptr); + SDEBUG(" dst: "); + atom_put_dst(ctx, arg, attr, &dptr, src, saved); +} + +static void atom_op_mul(atom_exec_context *ctx, int *ptr, int arg) +{ + uint8_t attr = U8((*ptr)++); + uint32_t dst, src; + SDEBUG(" src1: "); + dst = atom_get_dst(ctx, arg, attr, ptr, NULL, 1); + SDEBUG(" src2: "); + src = atom_get_src(ctx, attr, ptr); + ctx->ctx->divmul[0] = dst*src; +} + +static void atom_op_nop(atom_exec_context 
*ctx, int *ptr, int arg) +{ + /* nothing */ +} + +static void atom_op_or(atom_exec_context *ctx, int *ptr, int arg) +{ + uint8_t attr = U8((*ptr)++); + uint32_t dst, src, saved; + int dptr = *ptr; + SDEBUG(" dst: "); + dst = atom_get_dst(ctx, arg, attr, ptr, &saved, 1); + SDEBUG(" src: "); + src = atom_get_src(ctx, attr, ptr); + dst |= src; + SDEBUG(" dst: "); + atom_put_dst(ctx, arg, attr, &dptr, dst, saved); +} + +static void atom_op_postcard(atom_exec_context *ctx, int *ptr, int arg) +{ + uint8_t val = U8((*ptr)++); + SDEBUG("POST card output: 0x%02X\n", val); +} + +static void atom_op_repeat(atom_exec_context *ctx, int *ptr, int arg) +{ + printk(KERN_INFO "unimplemented!\n"); +} + +static void atom_op_restorereg(atom_exec_context *ctx, int *ptr, int arg) +{ + printk(KERN_INFO "unimplemented!\n"); +} + +static void atom_op_savereg(atom_exec_context *ctx, int *ptr, int arg) +{ + printk(KERN_INFO "unimplemented!\n"); +} + +static void atom_op_setdatablock(atom_exec_context *ctx, int *ptr, int arg) +{ + int idx = U8(*ptr); + (*ptr)++; + SDEBUG(" block: %d\n", idx); + if(!idx) + ctx->ctx->data_block = 0; + else if(idx==255) + ctx->ctx->data_block = ctx->start; + else + ctx->ctx->data_block = U16(ctx->ctx->data_table + 4 + 2*idx); + SDEBUG(" base: 0x%04X\n", ctx->ctx->data_block); +} + +static void atom_op_setfbbase(atom_exec_context *ctx, int *ptr, int arg) +{ + uint8_t attr = U8((*ptr)++); + SDEBUG(" fb_base: "); + ctx->ctx->fb_base = atom_get_src(ctx, attr, ptr); +} + +static void atom_op_setport(atom_exec_context *ctx, int *ptr, int arg) +{ + int port; + switch(arg) { + case ATOM_PORT_ATI: + port = U16(*ptr); + if(port < ATOM_IO_NAMES_CNT) + SDEBUG(" port: %d (%s)\n", port, atom_io_names[port]); + else + SDEBUG(" port: %d\n", port); + if(!port) + ctx->ctx->io_mode = ATOM_IO_MM; + else + ctx->ctx->io_mode = ATOM_IO_IIO|port; + (*ptr)+=2; + break; + case ATOM_PORT_PCI: + ctx->ctx->io_mode = ATOM_IO_PCI; + (*ptr)++; + break; + case ATOM_PORT_SYSIO: + 
ctx->ctx->io_mode = ATOM_IO_SYSIO; + (*ptr)++; + break; + } +} + +static void atom_op_setregblock(atom_exec_context *ctx, int *ptr, int arg) +{ + ctx->ctx->reg_block = U16(*ptr); + (*ptr)+=2; + SDEBUG(" base: 0x%04X\n", ctx->ctx->reg_block); +} + +static void atom_op_shl(atom_exec_context *ctx, int *ptr, int arg) +{ + uint8_t attr = U8((*ptr)++), shift; + uint32_t saved, dst; + int dptr = *ptr; + attr &= 0x38; + attr |= atom_def_dst[attr>>3]<<6; + SDEBUG(" dst: "); + dst = atom_get_dst(ctx, arg, attr, ptr, &saved, 1); + shift = U8((*ptr)++); + SDEBUG(" shift: %d\n", shift); + dst <<= shift; + SDEBUG(" dst: "); + ... [truncated message content] |
From: Jesse B. <jb...@vi...> - 2008-10-30 22:08:14
|
This commit adds the core mode setting routines for use by DRM drivers to manage outputs and displays. Originally based on the X.Org Randr 1.2 implementation, the code has since been heavily changed by Dave Airlie with contributions by Jesse Barnes, Jakob Bornecrantz and others. This one should probably be split up a bit; I think the TTM stuff in particular could be factored out fairly easily. diff --git a/arch/x86/mm/pat.c b/arch/x86/mm/pat.c index 738fd0f..31ce044 100644 --- a/arch/x86/mm/pat.c +++ b/arch/x86/mm/pat.c @@ -11,6 +11,7 @@ #include <linux/bootmem.h> #include <linux/debugfs.h> #include <linux/kernel.h> +#include <linux/module.h> #include <linux/gfp.h> #include <linux/mm.h> #include <linux/fs.h> @@ -29,6 +30,7 @@ #ifdef CONFIG_X86_PAT int __read_mostly pat_enabled = 1; +EXPORT_SYMBOL_GPL(pat_enabled); void __cpuinit pat_disable(char *reason) { diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig index a8b33c2..6723182 100644 --- a/drivers/gpu/drm/Kconfig +++ b/drivers/gpu/drm/Kconfig @@ -41,6 +41,14 @@ config DRM_RADEON If M is selected, the module will be called radeon. +config DRM_RADEON_KMS + bool "Enable modesetting on radeon by default" + depends on DRM_RADEON + help + Choose this option if you want kernel modesetting enabled by default, + and you have a new enough userspace to support this. Running old + userspaces with this enabled will cause pain. + config DRM_I810 tristate "Intel I810" depends on DRM && AGP && AGP_INTEL @@ -76,6 +84,15 @@ config DRM_I915 endchoice +config DRM_I915_KMS + bool "Enable modesetting on intel by default" + depends on DRM_I915 + help + Choose this option if you want kernel modesetting enabled by default, + and you have a new enough userspace to support this. Running old + userspaces with this enabled will cause pain. 
+ + config DRM_MGA tristate "Matrox g200/g400" depends on DRM diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile index 74da994..48567a9 100644 --- a/drivers/gpu/drm/Makefile +++ b/drivers/gpu/drm/Makefile @@ -9,7 +9,9 @@ drm-y := drm_auth.o drm_bufs.o drm_cache.o \ drm_drv.o drm_fops.o drm_gem.o drm_ioctl.o drm_irq.o \ drm_lock.o drm_memory.o drm_proc.o drm_stub.o drm_vm.o \ drm_agpsupport.o drm_scatter.o ati_pcigart.o drm_pci.o \ - drm_sysfs.o drm_hashtab.o drm_sman.o drm_mm.o + drm_sysfs.o drm_hashtab.o drm_sman.o drm_mm.o \ + drm_fence.o drm_bo.o drm_ttm.o drm_bo_move.o \ + drm_crtc.o drm_crtc_helper.o drm_modes.o drm_edid.o drm-$(CONFIG_COMPAT) += drm_ioc32.o diff --git a/drivers/gpu/drm/ati_pcigart.c b/drivers/gpu/drm/ati_pcigart.c index c533d0c..adc57dd 100644 --- a/drivers/gpu/drm/ati_pcigart.c +++ b/drivers/gpu/drm/ati_pcigart.c @@ -34,9 +34,55 @@ #include "drmP.h" # define ATI_PCIGART_PAGE_SIZE 4096 /**< PCI GART page size */ +# define ATI_PCIGART_PAGE_MASK (~(ATI_PCIGART_PAGE_SIZE-1)) -static int drm_ati_alloc_pcigart_table(struct drm_device *dev, - struct drm_ati_pcigart_info *gart_info) +#define ATI_PCIE_WRITE 0x4 +#define ATI_PCIE_READ 0x8 + +static __inline__ void gart_insert_page_into_table(struct drm_ati_pcigart_info *gart_info, dma_addr_t addr, volatile u32 *pci_gart) +{ + u32 page_base; + + page_base = (u32)addr & ATI_PCIGART_PAGE_MASK; + switch(gart_info->gart_reg_if) { + case DRM_ATI_GART_IGP: + page_base |= (upper_32_bits(addr) & 0xff) << 4; + page_base |= 0xc; + break; + case DRM_ATI_GART_PCIE: + page_base >>= 8; + page_base |= (upper_32_bits(addr) & 0xff) << 24; + page_base |= ATI_PCIE_READ | ATI_PCIE_WRITE; + break; + default: + case DRM_ATI_GART_PCI: + break; + } + *pci_gart = cpu_to_le32(page_base); +} + +static __inline__ dma_addr_t gart_get_page_from_table(struct drm_ati_pcigart_info *gart_info, volatile u32 *pci_gart) +{ + dma_addr_t retval; + switch(gart_info->gart_reg_if) { + case DRM_ATI_GART_IGP: + retval = (*pci_gart 
& ATI_PCIGART_PAGE_MASK); + retval += (((*pci_gart & 0xf0) >> 4) << 16) << 16; + break; + case DRM_ATI_GART_PCIE: + retval = (*pci_gart & ~0xc); + retval <<= 8; + break; + case DRM_ATI_GART_PCI: + retval = *pci_gart; + break; + } + + return retval; +} + +int drm_ati_alloc_pcigart_table(struct drm_device *dev, + struct drm_ati_pcigart_info *gart_info) { gart_info->table_handle = drm_pci_alloc(dev, gart_info->table_size, PAGE_SIZE, @@ -44,12 +90,25 @@ static int drm_ati_alloc_pcigart_table(struct drm_device *dev, if (gart_info->table_handle == NULL) return -ENOMEM; +#ifdef CONFIG_X86 + /* IGPs only exist on x86 in any case */ + if (gart_info->gart_reg_if == DRM_ATI_GART_IGP) + set_memory_uc((unsigned long)gart_info->table_handle->vaddr, gart_info->table_size >> PAGE_SHIFT); +#endif + + memset(gart_info->table_handle->vaddr, 0, gart_info->table_size); return 0; } +EXPORT_SYMBOL(drm_ati_alloc_pcigart_table); static void drm_ati_free_pcigart_table(struct drm_device *dev, struct drm_ati_pcigart_info *gart_info) { +#ifdef CONFIG_X86 + /* IGPs only exist on x86 in any case */ + if (gart_info->gart_reg_if == DRM_ATI_GART_IGP) + set_memory_wb((unsigned long)gart_info->table_handle->vaddr, gart_info->table_size >> PAGE_SHIFT); +#endif drm_pci_free(dev, gart_info->table_handle); gart_info->table_handle = NULL; } @@ -63,7 +122,6 @@ int drm_ati_pcigart_cleanup(struct drm_device *dev, struct drm_ati_pcigart_info /* we need to support large memory configurations */ if (!entry) { - DRM_ERROR("no scatter/gather memory!\n"); return 0; } @@ -98,17 +156,14 @@ int drm_ati_pcigart_init(struct drm_device *dev, struct drm_ati_pcigart_info *ga struct drm_sg_mem *entry = dev->sg; void *address = NULL; unsigned long pages; - u32 *pci_gart, page_base; + u32 *pci_gart; dma_addr_t bus_address = 0; int i, j, ret = 0; int max_pages; + dma_addr_t entry_addr; - if (!entry) { - DRM_ERROR("no scatter/gather memory!\n"); - goto done; - } - if (gart_info->gart_table_location == DRM_ATI_GART_MAIN) { + if 
(gart_info->gart_table_location == DRM_ATI_GART_MAIN && gart_info->table_handle == NULL) { DRM_DEBUG("PCI: no table in VRAM: using normal RAM\n"); ret = drm_ati_alloc_pcigart_table(dev, gart_info); @@ -116,15 +171,19 @@ int drm_ati_pcigart_init(struct drm_device *dev, struct drm_ati_pcigart_info *ga DRM_ERROR("cannot allocate PCI GART page!\n"); goto done; } + } + if (gart_info->gart_table_location == DRM_ATI_GART_MAIN) { address = gart_info->table_handle->vaddr; bus_address = gart_info->table_handle->busaddr; } else { address = gart_info->addr; bus_address = gart_info->bus_addr; - DRM_DEBUG("PCI: Gart Table: VRAM %08LX mapped at %08lX\n", - (unsigned long long)bus_address, - (unsigned long)address); + } + + if (!entry) { + DRM_ERROR("no scatter/gather memory!\n"); + goto done; } pci_gart = (u32 *) address; @@ -133,8 +192,6 @@ int drm_ati_pcigart_init(struct drm_device *dev, struct drm_ati_pcigart_info *ga pages = (entry->pages <= max_pages) ? entry->pages : max_pages; - memset(pci_gart, 0, max_pages * sizeof(u32)); - for (i = 0; i < pages; i++) { /* we need to support large memory configurations */ entry->busaddr[i] = pci_map_page(dev->pdev, entry->pagelist[i], @@ -146,32 +203,18 @@ int drm_ati_pcigart_init(struct drm_device *dev, struct drm_ati_pcigart_info *ga bus_address = 0; goto done; } - page_base = (u32) entry->busaddr[i]; + entry_addr = entry->busaddr[i]; for (j = 0; j < (PAGE_SIZE / ATI_PCIGART_PAGE_SIZE); j++) { - switch(gart_info->gart_reg_if) { - case DRM_ATI_GART_IGP: - *pci_gart = cpu_to_le32((page_base) | 0xc); - break; - case DRM_ATI_GART_PCIE: - *pci_gart = cpu_to_le32((page_base >> 8) | 0xc); - break; - default: - case DRM_ATI_GART_PCI: - *pci_gart = cpu_to_le32(page_base); - break; - } + gart_insert_page_into_table(gart_info, entry_addr, pci_gart); pci_gart++; - page_base += ATI_PCIGART_PAGE_SIZE; + entry_addr += ATI_PCIGART_PAGE_SIZE; } } + ret = 1; -#if defined(__i386__) || defined(__x86_64__) - wbinvd(); -#else mb(); -#endif done: 
gart_info->addr = address; @@ -179,3 +222,142 @@ int drm_ati_pcigart_init(struct drm_device *dev, struct drm_ati_pcigart_info *ga return ret; } EXPORT_SYMBOL(drm_ati_pcigart_init); + +static int ati_pcigart_needs_unbind_cache_adjust(struct drm_ttm_backend *backend) +{ + return ((backend->flags & DRM_BE_FLAG_BOUND_CACHED) ? 0 : 1); +} + +static int ati_pcigart_populate(struct drm_ttm_backend *backend, + unsigned long num_pages, + struct page **pages, + struct page *dummy_read_page) +{ + struct ati_pcigart_ttm_backend *atipci_be = + container_of(backend, struct ati_pcigart_ttm_backend, backend); + + atipci_be->pages = pages; + atipci_be->num_pages = num_pages; + atipci_be->populated = 1; + return 0; +} + +static int ati_pcigart_bind_ttm(struct drm_ttm_backend *backend, + struct drm_bo_mem_reg *bo_mem) +{ + struct ati_pcigart_ttm_backend *atipci_be = + container_of(backend, struct ati_pcigart_ttm_backend, backend); + off_t j; + int i; + struct drm_ati_pcigart_info *info = atipci_be->gart_info; + volatile u32 *pci_gart; + dma_addr_t offset = bo_mem->mm_node->start; + dma_addr_t page_base; + + pci_gart = info->addr; + + j = offset; + while (j < (offset + atipci_be->num_pages)) { + if (gart_get_page_from_table(info, pci_gart + j)) + return -EBUSY; + j++; + } + + for (i = 0, j = offset; i < atipci_be->num_pages; i++, j++) { + struct page *cur_page = atipci_be->pages[i]; + /* write value */ + page_base = page_to_phys(cur_page); + gart_insert_page_into_table(info, page_base, pci_gart + j); + } + + mb(); + atipci_be->gart_flush_fn(atipci_be->dev); + + atipci_be->bound = 1; + atipci_be->offset = offset; + /* need to traverse table and add entries */ + DRM_DEBUG("\n"); + return 0; +} + +static int ati_pcigart_unbind_ttm(struct drm_ttm_backend *backend) +{ + struct ati_pcigart_ttm_backend *atipci_be = + container_of(backend, struct ati_pcigart_ttm_backend, backend); + struct drm_ati_pcigart_info *info = atipci_be->gart_info; + unsigned long offset = atipci_be->offset; + int i; 
+	off_t j;
+	volatile u32 *pci_gart = info->addr;
+
+	if (atipci_be->bound != 1)
+		return -EINVAL;
+
+	for (i = 0, j = offset; i < atipci_be->num_pages; i++, j++) {
+		*(pci_gart + j) = 0;
+	}
+
+	mb();
+	atipci_be->gart_flush_fn(atipci_be->dev);
+	atipci_be->bound = 0;
+	atipci_be->offset = 0;
+	return 0;
+}
+
+static void ati_pcigart_clear_ttm(struct drm_ttm_backend *backend)
+{
+	struct ati_pcigart_ttm_backend *atipci_be =
+		container_of(backend, struct ati_pcigart_ttm_backend, backend);
+
+	DRM_DEBUG("\n");
+	if (atipci_be->pages) {
+		backend->func->unbind(backend);
+		atipci_be->pages = NULL;
+
+	}
+	atipci_be->num_pages = 0;
+}
+
+static void ati_pcigart_destroy_ttm(struct drm_ttm_backend *backend)
+{
+	struct ati_pcigart_ttm_backend *atipci_be;
+	if (backend) {
+		DRM_DEBUG("\n");
+		atipci_be = container_of(backend, struct ati_pcigart_ttm_backend, backend);
+		if (atipci_be) {
+			if (atipci_be->pages) {
+				backend->func->clear(backend);
+			}
+			drm_ctl_free(atipci_be, sizeof(*atipci_be), DRM_MEM_TTM);
+		}
+	}
+}
+
+static struct drm_ttm_backend_func ati_pcigart_ttm_backend =
+{
+	.needs_ub_cache_adjust = ati_pcigart_needs_unbind_cache_adjust,
+	.populate = ati_pcigart_populate,
+	.clear = ati_pcigart_clear_ttm,
+	.bind = ati_pcigart_bind_ttm,
+	.unbind = ati_pcigart_unbind_ttm,
+	.destroy = ati_pcigart_destroy_ttm,
+};
+
+struct drm_ttm_backend *ati_pcigart_init_ttm(struct drm_device *dev, struct drm_ati_pcigart_info *info, void (*gart_flush_fn)(struct drm_device *dev))
+{
+	struct ati_pcigart_ttm_backend *atipci_be;
+
+	atipci_be = drm_ctl_calloc(1, sizeof (*atipci_be), DRM_MEM_TTM);
+	if (!atipci_be)
+		return NULL;
+
+	atipci_be->populated = 0;
+	atipci_be->backend.func = &ati_pcigart_ttm_backend;
+	/* atipci_be->backend.mem_type = DRM_BO_MEM_TT; */
+	atipci_be->gart_info = info;
+	atipci_be->gart_flush_fn = gart_flush_fn;
+	atipci_be->dev = dev;
+
+	return &atipci_be->backend;
+}
+EXPORT_SYMBOL(ati_pcigart_init_ttm);
diff --git a/drivers/gpu/drm/drm_agpsupport.c b/drivers/gpu/drm/drm_agpsupport.c
index 3d33b82..e048aa2 100644
--- a/drivers/gpu/drm/drm_agpsupport.c
+++ b/drivers/gpu/drm/drm_agpsupport.c
@@ -496,6 +496,177 @@ drm_agp_bind_pages(struct drm_device *dev,
 }
 EXPORT_SYMBOL(drm_agp_bind_pages);
 
+/*
+ * AGP ttm backend interface.
+ */
+
+#ifndef AGP_USER_TYPES
+#define AGP_USER_TYPES (1 << 16)
+#define AGP_USER_MEMORY (AGP_USER_TYPES)
+#define AGP_USER_CACHED_MEMORY (AGP_USER_TYPES + 1)
+#endif
+#define AGP_REQUIRED_MAJOR 0
+#define AGP_REQUIRED_MINOR 102
+
+static int drm_agp_needs_unbind_cache_adjust(struct drm_ttm_backend *backend)
+{
+	return ((backend->flags & DRM_BE_FLAG_BOUND_CACHED) ? 0 : 1);
+}
+
+
+static int drm_agp_populate(struct drm_ttm_backend *backend,
+			    unsigned long num_pages, struct page **pages,
+			    struct page *dummy_read_page)
+{
+	struct drm_agp_ttm_backend *agp_be =
+		container_of(backend, struct drm_agp_ttm_backend, backend);
+	struct page **cur_page, **last_page = pages + num_pages;
+	DRM_AGP_MEM *mem;
+	int dummy_page_count = 0;
+
+	if (drm_alloc_memctl(num_pages * sizeof(void *)))
+		return -1;
+
+	DRM_DEBUG("drm_agp_populate_ttm\n");
+	mem = drm_agp_allocate_memory(agp_be->bridge, num_pages, AGP_USER_MEMORY);
+	if (!mem) {
+		drm_free_memctl(num_pages * sizeof(void *));
+		return -1;
+	}
+
+	DRM_DEBUG("Current page count is %ld\n", (long) mem->page_count);
+	mem->page_count = 0;
+	for (cur_page = pages; cur_page < last_page; ++cur_page) {
+		struct page *page = *cur_page;
+		if (!page) {
+			page = dummy_read_page;
+			++dummy_page_count;
+		}
+		mem->memory[mem->page_count++] = phys_to_gart(page_to_phys(page));
+	}
+	if (dummy_page_count)
+		DRM_DEBUG("Mapped %d dummy pages\n", dummy_page_count);
+	agp_be->mem = mem;
+	return 0;
+}
+
+static int drm_agp_bind_ttm(struct drm_ttm_backend *backend,
+			    struct drm_bo_mem_reg *bo_mem)
+{
+	struct drm_agp_ttm_backend *agp_be =
+		container_of(backend, struct drm_agp_ttm_backend, backend);
+	DRM_AGP_MEM *mem = agp_be->mem;
+	int ret;
+	int snooped = (bo_mem->flags & DRM_BO_FLAG_CACHED) && !(bo_mem->flags & DRM_BO_FLAG_CACHED_MAPPED);
+
+	DRM_DEBUG("drm_agp_bind_ttm\n");
+	mem->is_flushed = true;
+	mem->type = AGP_USER_MEMORY;
+	/* CACHED MAPPED implies not snooped memory */
+	if (snooped)
+		mem->type = AGP_USER_CACHED_MEMORY;
+
+	ret = drm_agp_bind_memory(mem, bo_mem->mm_node->start);
+	if (ret)
+		DRM_ERROR("AGP Bind memory failed\n");
+
+	DRM_FLAG_MASKED(backend->flags, (bo_mem->flags & DRM_BO_FLAG_CACHED) ?
+			DRM_BE_FLAG_BOUND_CACHED : 0,
+			DRM_BE_FLAG_BOUND_CACHED);
+	return ret;
+}
+
+static int drm_agp_unbind_ttm(struct drm_ttm_backend *backend)
+{
+	struct drm_agp_ttm_backend *agp_be =
+		container_of(backend, struct drm_agp_ttm_backend, backend);
+
+	DRM_DEBUG("drm_agp_unbind_ttm\n");
+	if (agp_be->mem->is_bound)
+		return drm_agp_unbind_memory(agp_be->mem);
+	else
+		return 0;
+}
+
+static void drm_agp_clear_ttm(struct drm_ttm_backend *backend)
+{
+	struct drm_agp_ttm_backend *agp_be =
+		container_of(backend, struct drm_agp_ttm_backend, backend);
+	DRM_AGP_MEM *mem = agp_be->mem;
+
+	DRM_DEBUG("drm_agp_clear_ttm\n");
+	if (mem) {
+		unsigned long num_pages = mem->page_count;
+		backend->func->unbind(backend);
+		agp_free_memory(mem);
+		drm_free_memctl(num_pages * sizeof(void *));
+	}
+	agp_be->mem = NULL;
+}
+
+static void drm_agp_destroy_ttm(struct drm_ttm_backend *backend)
+{
+	struct drm_agp_ttm_backend *agp_be;
+
+	if (backend) {
+		DRM_DEBUG("drm_agp_destroy_ttm\n");
+		agp_be = container_of(backend, struct drm_agp_ttm_backend, backend);
+		if (agp_be) {
+			if (agp_be->mem)
+				backend->func->clear(backend);
+			drm_ctl_free(agp_be, sizeof(*agp_be), DRM_MEM_TTM);
+		}
+	}
+}
+
+static struct drm_ttm_backend_func agp_ttm_backend = {
+	.needs_ub_cache_adjust = drm_agp_needs_unbind_cache_adjust,
+	.populate = drm_agp_populate,
+	.clear = drm_agp_clear_ttm,
+	.bind = drm_agp_bind_ttm,
+	.unbind = drm_agp_unbind_ttm,
+	.destroy = drm_agp_destroy_ttm,
+};
+
+struct drm_ttm_backend *drm_agp_init_ttm(struct drm_device *dev)
+{
+
+	struct drm_agp_ttm_backend *agp_be;
+	struct agp_kern_info *info;
+
+	if (!dev->agp) {
+		DRM_ERROR("AGP is not initialized.\n");
+		return NULL;
+	}
+	info = &dev->agp->agp_info;
+
+	if (info->version.major != AGP_REQUIRED_MAJOR ||
+	    info->version.minor < AGP_REQUIRED_MINOR) {
+		DRM_ERROR("Wrong agpgart version %d.%d\n"
+			  "\tYou need at least version %d.%d.\n",
+			  info->version.major,
+			  info->version.minor,
+			  AGP_REQUIRED_MAJOR,
+			  AGP_REQUIRED_MINOR);
+		return NULL;
+	}
+
+
+	agp_be = drm_ctl_calloc(1, sizeof(*agp_be), DRM_MEM_TTM);
+	if (!agp_be)
+		return NULL;
+
+	agp_be->mem = NULL;
+
+	agp_be->bridge = dev->agp->bridge;
+	agp_be->populated = false;
+	agp_be->backend.func = &agp_ttm_backend;
+	agp_be->backend.dev = dev;
+
+	return &agp_be->backend;
+}
+EXPORT_SYMBOL(drm_agp_init_ttm);
+
 void drm_agp_chipset_flush(struct drm_device *dev)
 {
 	agp_flush_chipset(dev->agp->bridge);
diff --git a/drivers/gpu/drm/drm_auth.c b/drivers/gpu/drm/drm_auth.c
index a734627..ca7a9ef 100644
--- a/drivers/gpu/drm/drm_auth.c
+++ b/drivers/gpu/drm/drm_auth.c
@@ -45,14 +45,15 @@
  * the one with matching magic number, while holding the drm_device::struct_mutex
  * lock.
  */
-static struct drm_file *drm_find_file(struct drm_device * dev, drm_magic_t magic)
+static struct drm_file *drm_find_file(struct drm_master *master, drm_magic_t magic)
 {
 	struct drm_file *retval = NULL;
 	struct drm_magic_entry *pt;
 	struct drm_hash_item *hash;
+	struct drm_device *dev = master->minor->dev;
 
 	mutex_lock(&dev->struct_mutex);
-	if (!drm_ht_find_item(&dev->magiclist, (unsigned long)magic, &hash)) {
+	if (!drm_ht_find_item(&master->magiclist, (unsigned long)magic, &hash)) {
 		pt = drm_hash_entry(hash, struct drm_magic_entry, hash_item);
 		retval = pt->priv;
 	}
@@ -71,11 +72,11 @@ static struct drm_file *drm_find_file(struct drm_device * dev, drm_magic_t magic
  * associated the magic number hash key in drm_device::magiclist, while holding
  * the drm_device::struct_mutex lock.
  */
-static int drm_add_magic(struct drm_device * dev, struct drm_file * priv,
+static int drm_add_magic(struct drm_master *master, struct drm_file *priv,
 			 drm_magic_t magic)
 {
 	struct drm_magic_entry *entry;
-
+	struct drm_device *dev = master->minor->dev;
 	DRM_DEBUG("%d\n", magic);
 
 	entry = drm_alloc(sizeof(*entry), DRM_MEM_MAGIC);
@@ -83,11 +84,10 @@ static int drm_add_magic(struct drm_device * dev, struct drm_file * priv,
 		return -ENOMEM;
 	memset(entry, 0, sizeof(*entry));
 	entry->priv = priv;
-
 	entry->hash_item.key = (unsigned long)magic;
 	mutex_lock(&dev->struct_mutex);
-	drm_ht_insert_item(&dev->magiclist, &entry->hash_item);
-	list_add_tail(&entry->head, &dev->magicfree);
+	drm_ht_insert_item(&master->magiclist, &entry->hash_item);
+	list_add_tail(&entry->head, &master->magicfree);
 	mutex_unlock(&dev->struct_mutex);
 
 	return 0;
@@ -102,20 +102,21 @@ static int drm_add_magic(struct drm_device * dev, struct drm_file * priv,
 * Searches and unlinks the entry in drm_device::magiclist with the magic
 * number hash key, while holding the drm_device::struct_mutex lock.
 */
-static int drm_remove_magic(struct drm_device * dev, drm_magic_t magic)
+static int drm_remove_magic(struct drm_master *master, drm_magic_t magic)
 {
 	struct drm_magic_entry *pt;
 	struct drm_hash_item *hash;
+	struct drm_device *dev = master->minor->dev;
 
 	DRM_DEBUG("%d\n", magic);
 
 	mutex_lock(&dev->struct_mutex);
-	if (drm_ht_find_item(&dev->magiclist, (unsigned long)magic, &hash)) {
+	if (drm_ht_find_item(&master->magiclist, (unsigned long)magic, &hash)) {
 		mutex_unlock(&dev->struct_mutex);
 		return -EINVAL;
 	}
 	pt = drm_hash_entry(hash, struct drm_magic_entry, hash_item);
-	drm_ht_remove_item(&dev->magiclist, hash);
+	drm_ht_remove_item(&master->magiclist, hash);
 	list_del(&pt->head);
 	mutex_unlock(&dev->struct_mutex);
@@ -153,9 +154,9 @@ int drm_getmagic(struct drm_device *dev, void *data, struct drm_file *file_priv)
 			++sequence;	/* reserve 0 */
 			auth->magic = sequence++;
 			spin_unlock(&lock);
-		} while (drm_find_file(dev, auth->magic));
+		} while (drm_find_file(file_priv->master, auth->magic));
 		file_priv->magic = auth->magic;
-		drm_add_magic(dev, file_priv, auth->magic);
+		drm_add_magic(file_priv->master, file_priv, auth->magic);
 	}
 
 	DRM_DEBUG("%u\n", auth->magic);
@@ -181,9 +182,9 @@ int drm_authmagic(struct drm_device *dev, void *data,
 	struct drm_file *file;
 
 	DRM_DEBUG("%u\n", auth->magic);
-	if ((file = drm_find_file(dev, auth->magic))) {
+	if ((file = drm_find_file(file_priv->master, auth->magic))) {
 		file->authenticated = 1;
-		drm_remove_magic(dev, auth->magic);
+		drm_remove_magic(file_priv->master, auth->magic);
 		return 0;
 	}
 	return -EINVAL;
diff --git a/drivers/gpu/drm/drm_bo.c b/drivers/gpu/drm/drm_bo.c
new file mode 100644
index 0000000..5cec5a0
--- /dev/null
+++ b/drivers/gpu/drm/drm_bo.c
@@ -0,0 +1,2116 @@
+/**************************************************************************
+ *
+ * Copyright (c) 2006-2007 Tungsten Graphics, Inc., Cedar Park, TX., USA
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the
+ * "Software"), to deal in the Software without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sub license, and/or sell copies of the Software, and to
+ * permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the
+ * next paragraph) shall be included in all copies or substantial portions
+ * of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
+ * USE OR OTHER DEALINGS IN THE SOFTWARE.
+ *
+ **************************************************************************/
+/*
+ * Authors: Thomas Hellström <thomas-at-tungstengraphics-dot-com>
+ */
+
+#include "drmP.h"
+
+/*
+ * Locking may look a bit complicated but isn't really:
+ *
+ * The buffer usage atomic_t needs to be protected by dev->struct_mutex
+ * when there is a chance that it can be zero before or after the operation.
+ *
+ * dev->struct_mutex also protects all lists and list heads,
+ * Hash tables and hash heads.
+ *
+ * bo->mutex protects the buffer object itself excluding the usage field.
+ * bo->mutex does also protect the buffer list heads, so to manipulate those,
+ * we need both the bo->mutex and the dev->struct_mutex.
+ *
+ * Locking order is bo->mutex, dev->struct_mutex. Therefore list traversal
+ * is a bit complicated. When dev->struct_mutex is released to grab bo->mutex,
+ * the list traversal will, in general, need to be restarted.
+ *
+ */
+
+static void drm_bo_destroy_locked(struct drm_buffer_object *bo);
+static int drm_bo_setup_vm_locked(struct drm_buffer_object *bo);
+static void drm_bo_unmap_virtual(struct drm_buffer_object *bo);
+
+static inline uint64_t drm_bo_type_flags(unsigned type)
+{
+	return (1ULL << (24 + type));
+}
+
+/*
+ * bo locked. dev->struct_mutex locked.
+ */
+
+void drm_bo_add_to_pinned_lru(struct drm_buffer_object *bo)
+{
+	struct drm_mem_type_manager *man;
+
+	DRM_ASSERT_LOCKED(&bo->dev->struct_mutex);
+	DRM_ASSERT_LOCKED(&bo->mutex);
+
+	man = &bo->dev->bm.man[bo->pinned_mem_type];
+	list_add_tail(&bo->pinned_lru, &man->pinned);
+}
+
+void drm_bo_add_to_lru(struct drm_buffer_object *bo)
+{
+	struct drm_mem_type_manager *man;
+
+	DRM_ASSERT_LOCKED(&bo->dev->struct_mutex);
+
+	if (!(bo->mem.proposed_flags & (DRM_BO_FLAG_NO_MOVE | DRM_BO_FLAG_NO_EVICT))
+	    || bo->mem.mem_type != bo->pinned_mem_type) {
+		man = &bo->dev->bm.man[bo->mem.mem_type];
+		list_add_tail(&bo->lru, &man->lru);
+	} else {
+		INIT_LIST_HEAD(&bo->lru);
+	}
+}
+
+static int drm_bo_vm_pre_move(struct drm_buffer_object *bo, int old_is_pci)
+{
+#ifdef DRM_ODD_MM_COMPAT
+	int ret;
+
+	if (!bo->map_list.map)
+		return 0;
+
+	ret = drm_bo_lock_kmm(bo);
+	if (ret)
+		return ret;
+	drm_bo_unmap_virtual(bo);
+	if (old_is_pci)
+		drm_bo_finish_unmap(bo);
+#else
+	if (!bo->map_list.map)
+		return 0;
+
+	drm_bo_unmap_virtual(bo);
+#endif
+	return 0;
+}
+
+static void drm_bo_vm_post_move(struct drm_buffer_object *bo)
+{
+#ifdef DRM_ODD_MM_COMPAT
+	int ret;
+
+	if (!bo->map_list.map)
+		return;
+
+	ret = drm_bo_remap_bound(bo);
+	if (ret) {
+		DRM_ERROR("Failed to remap a bound buffer object.\n"
+			  "\tThis might cause a sigbus later.\n");
+	}
+	drm_bo_unlock_kmm(bo);
+#endif
+}
+
+/*
+ * Call bo->mutex locked.
+ */
+
+int drm_bo_add_ttm(struct drm_buffer_object *bo)
+{
+	struct drm_device *dev = bo->dev;
+	int ret = 0;
+	uint32_t page_flags = 0;
+
+	DRM_ASSERT_LOCKED(&bo->mutex);
+	bo->ttm = NULL;
+
+	if (bo->mem.proposed_flags & DRM_BO_FLAG_WRITE)
+		page_flags |= DRM_TTM_PAGE_WRITE;
+
+	switch (bo->type) {
+	case drm_bo_type_device:
+	case drm_bo_type_kernel:
+		bo->ttm = drm_ttm_create(dev, bo->num_pages << PAGE_SHIFT,
+					 page_flags, dev->bm.dummy_read_page);
+		if (!bo->ttm)
+			ret = -ENOMEM;
+		break;
+	case drm_bo_type_user:
+		bo->ttm = drm_ttm_create(dev, bo->num_pages << PAGE_SHIFT,
+					 page_flags | DRM_TTM_PAGE_USER,
+					 dev->bm.dummy_read_page);
+		if (!bo->ttm) {
+			ret = -ENOMEM;
+			break;
+		}
+
+		ret = drm_ttm_set_user(bo->ttm, current,
+				       bo->buffer_start,
+				       bo->num_pages);
+		if (ret)
+			return ret;
+
+		break;
+	default:
+		DRM_ERROR("Illegal buffer object type\n");
+		ret = -EINVAL;
+		break;
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL(drm_bo_add_ttm);
+
+static int drm_bo_handle_move_mem(struct drm_buffer_object *bo,
+				  struct drm_bo_mem_reg *mem,
+				  int evict, int no_wait)
+{
+	struct drm_device *dev = bo->dev;
+	struct drm_buffer_manager *bm = &dev->bm;
+	int old_is_pci = drm_mem_reg_is_pci(dev, &bo->mem);
+	int new_is_pci = drm_mem_reg_is_pci(dev, mem);
+	struct drm_mem_type_manager *old_man = &bm->man[bo->mem.mem_type];
+	struct drm_mem_type_manager *new_man = &bm->man[mem->mem_type];
+	int ret = 0;
+
+	if (old_is_pci || new_is_pci ||
+	    ((mem->flags ^ bo->mem.flags) & DRM_BO_FLAG_CACHED))
+		ret = drm_bo_vm_pre_move(bo, old_is_pci);
+	if (ret)
+		return ret;
+
+	/*
+	 * Create and bind a ttm if required.
+	 */
+
+	if (!(new_man->flags & _DRM_FLAG_MEMTYPE_FIXED) && (bo->ttm == NULL)) {
+		ret = drm_bo_add_ttm(bo);
+		if (ret)
+			goto out_err;
+
+		if (mem->mem_type != DRM_BO_MEM_LOCAL) {
+			ret = drm_ttm_bind(bo->ttm, mem);
+			if (ret)
+				goto out_err;
+		}
+
+		if (bo->mem.mem_type == DRM_BO_MEM_LOCAL) {
+
+			struct drm_bo_mem_reg *old_mem = &bo->mem;
+			uint64_t save_flags = old_mem->flags;
+			uint64_t save_proposed_flags = old_mem->proposed_flags;
+
+			*old_mem = *mem;
+			mem->mm_node = NULL;
+			old_mem->proposed_flags = save_proposed_flags;
+			DRM_FLAG_MASKED(save_flags, mem->flags,
+					DRM_BO_MASK_MEMTYPE);
+			goto moved;
+		}
+
+	}
+
+	if (!(old_man->flags & _DRM_FLAG_MEMTYPE_FIXED) &&
+	    !(new_man->flags & _DRM_FLAG_MEMTYPE_FIXED))
+		ret = drm_bo_move_ttm(bo, evict, no_wait, mem);
+	else if (dev->driver->bo_driver->move)
+		ret = dev->driver->bo_driver->move(bo, evict, no_wait, mem);
+	else
+		ret = drm_bo_move_memcpy(bo, evict, no_wait, mem);
+
+	if (ret)
+		goto out_err;
+
+moved:
+	if (old_is_pci || new_is_pci)
+		drm_bo_vm_post_move(bo);
+
+	if (bo->priv_flags & _DRM_BO_FLAG_EVICTED) {
+		ret =
+		    dev->driver->bo_driver->invalidate_caches(dev,
+							      bo->mem.flags);
+		if (ret)
+			DRM_ERROR("Can not flush read caches\n");
+	}
+
+	DRM_FLAG_MASKED(bo->priv_flags,
+			(evict) ? _DRM_BO_FLAG_EVICTED : 0,
+			_DRM_BO_FLAG_EVICTED);
+
+	if (bo->mem.mm_node)
+		bo->offset = (bo->mem.mm_node->start << PAGE_SHIFT) +
+			bm->man[bo->mem.mem_type].gpu_offset;
+
+
+	return 0;
+
+out_err:
+	if (old_is_pci || new_is_pci)
+		drm_bo_vm_post_move(bo);
+
+	new_man = &bm->man[bo->mem.mem_type];
+	if ((new_man->flags & _DRM_FLAG_MEMTYPE_FIXED) && bo->ttm) {
+		drm_ttm_unbind(bo->ttm);
+		drm_ttm_destroy(bo->ttm);
+		bo->ttm = NULL;
+	}
+
+	return ret;
+}
+
+/*
+ * Call bo->mutex locked.
+ * Returns -EBUSY if the buffer is currently rendered to or from. 0 otherwise.
+ */
+
+static int drm_bo_busy(struct drm_buffer_object *bo, int check_unfenced)
+{
+	struct drm_fence_object *fence = bo->fence;
+
+	if (check_unfenced && (bo->priv_flags & _DRM_BO_FLAG_UNFENCED))
+		return -EBUSY;
+
+	if (fence) {
+		if (drm_fence_object_signaled(fence, bo->fence_type)) {
+			drm_fence_usage_deref_unlocked(&bo->fence);
+			return 0;
+		}
+		drm_fence_object_flush(fence, DRM_FENCE_TYPE_EXE);
+		if (drm_fence_object_signaled(fence, bo->fence_type)) {
+			drm_fence_usage_deref_unlocked(&bo->fence);
+			return 0;
+		}
+		return -EBUSY;
+	}
+	return 0;
+}
+
+static int drm_bo_check_unfenced(struct drm_buffer_object *bo)
+{
+	int ret;
+
+	mutex_lock(&bo->mutex);
+	ret = (bo->priv_flags & _DRM_BO_FLAG_UNFENCED);
+	mutex_unlock(&bo->mutex);
+	return ret;
+}
+
+
+/*
+ * Call bo->mutex locked.
+ * Wait until the buffer is idle.
+ */
+
+int drm_bo_wait(struct drm_buffer_object *bo, int lazy, int interruptible,
+		int no_wait, int check_unfenced)
+{
+	int ret;
+
+	DRM_ASSERT_LOCKED(&bo->mutex);
+	while(unlikely(drm_bo_busy(bo, check_unfenced))) {
+		if (no_wait)
+			return -EBUSY;
+
+		if (check_unfenced && (bo->priv_flags & _DRM_BO_FLAG_UNFENCED)) {
+			mutex_unlock(&bo->mutex);
+			wait_event(bo->event_queue, !drm_bo_check_unfenced(bo));
+			mutex_lock(&bo->mutex);
+			bo->priv_flags |= _DRM_BO_FLAG_UNLOCKED;
+		}
+
+		if (bo->fence) {
+			struct drm_fence_object *fence;
+			uint32_t fence_type = bo->fence_type;
+
+			drm_fence_reference_unlocked(&fence, bo->fence);
+			mutex_unlock(&bo->mutex);
+
+			ret = drm_fence_object_wait(fence, lazy, !interruptible,
+						    fence_type);
+
+			drm_fence_usage_deref_unlocked(&fence);
+			mutex_lock(&bo->mutex);
+			bo->priv_flags |= _DRM_BO_FLAG_UNLOCKED;
+			if (ret)
+				return ret;
+		}
+
+	}
+	return 0;
+}
+EXPORT_SYMBOL(drm_bo_wait);
+
+static int drm_bo_expire_fence(struct drm_buffer_object *bo, int allow_errors)
+{
+	struct drm_device *dev = bo->dev;
+	struct drm_buffer_manager *bm = &dev->bm;
+
+	if (bo->fence) {
+		if (bm->nice_mode) {
+			unsigned long _end = jiffies + 3 * DRM_HZ;
+			int ret;
+			do {
+				ret = drm_bo_wait(bo, 0, 0, 0, 0);
+				if (ret && allow_errors)
+					return ret;
+
+			} while (ret && !time_after_eq(jiffies, _end));
+
+			if (bo->fence) {
+				bm->nice_mode = 0;
+				DRM_ERROR("Detected GPU lockup or "
+					  "fence driver was taken down. "
+					  "Evicting buffer.\n");
+			}
+		}
+		if (bo->fence)
+			drm_fence_usage_deref_unlocked(&bo->fence);
+	}
+	return 0;
+}
+
+/*
+ * Call dev->struct_mutex locked.
+ * Attempts to remove all private references to a buffer by expiring its
+ * fence object and removing from lru lists and memory managers.
+ */
+
+static void drm_bo_cleanup_refs(struct drm_buffer_object *bo, int remove_all)
+{
+	struct drm_device *dev = bo->dev;
+	struct drm_buffer_manager *bm = &dev->bm;
+
+	DRM_ASSERT_LOCKED(&dev->struct_mutex);
+
+	atomic_inc(&bo->usage);
+	mutex_unlock(&dev->struct_mutex);
+	mutex_lock(&bo->mutex);
+
+	DRM_FLAG_MASKED(bo->priv_flags, 0, _DRM_BO_FLAG_UNFENCED);
+
+	if (bo->fence && drm_fence_object_signaled(bo->fence,
+						   bo->fence_type))
+		drm_fence_usage_deref_unlocked(&bo->fence);
+
+	if (bo->fence && remove_all)
+		(void)drm_bo_expire_fence(bo, 0);
+
+	mutex_lock(&dev->struct_mutex);
+
+	if (!atomic_dec_and_test(&bo->usage))
+		goto out;
+
+	if (!bo->fence) {
+		list_del_init(&bo->lru);
+		if (bo->mem.mm_node) {
+			drm_mm_put_block(bo->mem.mm_node);
+			if (bo->pinned_node == bo->mem.mm_node)
+				bo->pinned_node = NULL;
+			bo->mem.mm_node = NULL;
+		}
+		list_del_init(&bo->pinned_lru);
+		if (bo->pinned_node) {
+			drm_mm_put_block(bo->pinned_node);
+			bo->pinned_node = NULL;
+		}
+		list_del_init(&bo->ddestroy);
+		mutex_unlock(&bo->mutex);
+		drm_bo_destroy_locked(bo);
+		return;
+	}
+
+	if (list_empty(&bo->ddestroy)) {
+		drm_fence_object_flush(bo->fence, bo->fence_type);
+		list_add_tail(&bo->ddestroy, &bm->ddestroy);
+		schedule_delayed_work(&bm->wq,
+				      ((DRM_HZ / 100) < 1) ? 1 : DRM_HZ / 100);
+	}
+
+out:
+	mutex_unlock(&bo->mutex);
+	return;
+}
+
+/*
+ * Verify that refcount is 0 and that there are no internal references
+ * to the buffer object. Then destroy it.
+ */
+
+static void drm_bo_destroy_locked(struct drm_buffer_object *bo)
+{
+	struct drm_device *dev = bo->dev;
+	struct drm_buffer_manager *bm = &dev->bm;
+
+	DRM_ASSERT_LOCKED(&dev->struct_mutex);
+
+	DRM_DEBUG("freeing %p\n", bo);
+	if (list_empty(&bo->lru) && bo->mem.mm_node == NULL &&
+	    list_empty(&bo->pinned_lru) && bo->pinned_node == NULL &&
+	    list_empty(&bo->ddestroy) && atomic_read(&bo->usage) == 0) {
+		if (bo->fence != NULL) {
+			DRM_ERROR("Fence was non-zero.\n");
+			drm_bo_cleanup_refs(bo, 0);
+			return;
+		}
+
+#ifdef DRM_ODD_MM_COMPAT
+		BUG_ON(!list_empty(&bo->vma_list));
+		BUG_ON(!list_empty(&bo->p_mm_list));
+#endif
+
+		if (bo->ttm) {
+			drm_ttm_unbind(bo->ttm);
+			drm_ttm_destroy(bo->ttm);
+			bo->ttm = NULL;
+		}
+
+		atomic_dec(&bm->count);
+
+		drm_ctl_free(bo, sizeof(*bo), DRM_MEM_BUFOBJ);
+
+		return;
+	}
+
+	/*
+	 * Some stuff is still trying to reference the buffer object.
+	 * Get rid of those references.
+	 */
+
+	drm_bo_cleanup_refs(bo, 0);
+
+	return;
+}
+
+/*
+ * Call dev->struct_mutex locked.
+ */
+
+static void drm_bo_delayed_delete(struct drm_device *dev, int remove_all)
+{
+	struct drm_buffer_manager *bm = &dev->bm;
+
+	struct drm_buffer_object *entry, *nentry;
+	struct list_head *list, *next;
+
+	list_for_each_safe(list, next, &bm->ddestroy) {
+		entry = list_entry(list, struct drm_buffer_object, ddestroy);
+
+		nentry = NULL;
+		DRM_DEBUG("bo is %p, %d\n", entry, entry->num_pages);
+		if (next != &bm->ddestroy) {
+			nentry = list_entry(next, struct drm_buffer_object,
+					    ddestroy);
+			atomic_inc(&nentry->usage);
+		}
+
+		drm_bo_cleanup_refs(entry, remove_all);
+
+		if (nentry)
+			atomic_dec(&nentry->usage);
+	}
+}
+
+static void drm_bo_delayed_workqueue(struct work_struct *work)
+{
+	struct drm_buffer_manager *bm =
+		container_of(work, struct drm_buffer_manager, wq.work);
+	struct drm_device *dev = container_of(bm, struct drm_device, bm);
+
+	DRM_DEBUG("Delayed delete Worker\n");
+
+	mutex_lock(&dev->struct_mutex);
+	if (!bm->initialized) {
+		mutex_unlock(&dev->struct_mutex);
+		return;
+	}
+	drm_bo_delayed_delete(dev, 0);
+	if (bm->initialized && !list_empty(&bm->ddestroy)) {
+		schedule_delayed_work(&bm->wq,
+				      ((DRM_HZ / 100) < 1) ? 1 : DRM_HZ / 100);
+	}
+	mutex_unlock(&dev->struct_mutex);
+}
+
+void drm_bo_usage_deref_locked(struct drm_buffer_object **bo)
+{
+	struct drm_buffer_object *tmp_bo = *bo;
+	*bo = NULL;
+
+	DRM_ASSERT_LOCKED(&tmp_bo->dev->struct_mutex);
+
+	if (atomic_dec_and_test(&tmp_bo->usage))
+		drm_bo_destroy_locked(tmp_bo);
+}
+EXPORT_SYMBOL(drm_bo_usage_deref_locked);
+
+void drm_bo_usage_deref_unlocked(struct drm_buffer_object **bo)
+{
+	struct drm_buffer_object *tmp_bo = *bo;
+	struct drm_device *dev = tmp_bo->dev;
+
+	*bo = NULL;
+	if (atomic_dec_and_test(&tmp_bo->usage)) {
+		mutex_lock(&dev->struct_mutex);
+		if (atomic_read(&tmp_bo->usage) == 0)
+			drm_bo_destroy_locked(tmp_bo);
+		mutex_unlock(&dev->struct_mutex);
+	}
+}
+EXPORT_SYMBOL(drm_bo_usage_deref_unlocked);
+
+void drm_putback_buffer_objects(struct drm_device *dev)
+{
+	struct drm_buffer_manager *bm = &dev->bm;
+	struct list_head *list = &bm->unfenced;
+	struct drm_buffer_object *entry, *next;
+
+	mutex_lock(&dev->struct_mutex);
+	list_for_each_entry_safe(entry, next, list, lru) {
+		atomic_inc(&entry->usage);
+		mutex_unlock(&dev->struct_mutex);
+
+		mutex_lock(&entry->mutex);
+		BUG_ON(!(entry->priv_flags & _DRM_BO_FLAG_UNFENCED));
+		mutex_lock(&dev->struct_mutex);
+
+		list_del_init(&entry->lru);
+		DRM_FLAG_MASKED(entry->priv_flags, 0, _DRM_BO_FLAG_UNFENCED);
+		wake_up_all(&entry->event_queue);
+
+		/*
+		 * FIXME: Might want to put back on head of list
+		 * instead of tail here.
+		 */
+
+		drm_bo_add_to_lru(entry);
+		mutex_unlock(&entry->mutex);
+		drm_bo_usage_deref_locked(&entry);
+	}
+	mutex_unlock(&dev->struct_mutex);
+}
+EXPORT_SYMBOL(drm_putback_buffer_objects);
+
+/*
+ * Note. The caller has to register (if applicable)
+ * and deregister fence object usage.
+ */
+
+int drm_fence_buffer_objects(struct drm_device *dev,
+			     struct list_head *list,
+			     uint32_t fence_flags,
+			     struct drm_fence_object *fence,
+			     struct drm_fence_object **used_fence)
+{
+	struct drm_buffer_manager *bm = &dev->bm;
+	struct drm_buffer_object *entry;
+	uint32_t fence_type = 0;
+	uint32_t fence_class = ~0;
+	int count = 0;
+	int ret = 0;
+	struct list_head *l;
+
+	mutex_lock(&dev->struct_mutex);
+
+	if (!list)
+		list = &bm->unfenced;
+
+	if (fence)
+		fence_class = fence->fence_class;
+
+	list_for_each_entry(entry, list, lru) {
+		BUG_ON(!(entry->priv_flags & _DRM_BO_FLAG_UNFENCED));
+		fence_type |= entry->new_fence_type;
+		if (fence_class == ~0)
+			fence_class = entry->new_fence_class;
+		else if (entry->new_fence_class != fence_class) {
+			DRM_ERROR("Unmatching fence classes on unfenced list: "
+				  "%d and %d.\n",
+				  fence_class,
+				  entry->new_fence_class);
+			ret = -EINVAL;
+			goto out;
+		}
+		count++;
+	}
+
+	if (!count) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	if (fence) {
+		if ((fence_type & fence->type) != fence_type ||
+		    (fence->fence_class != fence_class)) {
+			DRM_ERROR("Given fence doesn't match buffers "
+				  "on unfenced list.\n");
+			ret = -EINVAL;
+			goto out;
+		}
+	} else {
+		mutex_unlock(&dev->struct_mutex);
+		ret = drm_fence_object_create(dev, fence_class, fence_type,
+					      fence_flags | DRM_FENCE_FLAG_EMIT,
+					      &fence);
+		mutex_lock(&dev->struct_mutex);
+		if (ret)
+			goto out;
+	}
+
+	count = 0;
+	l = list->next;
+	while (l != list) {
+		prefetch(l->next);
+		entry = list_entry(l, struct drm_buffer_object, lru);
+		atomic_inc(&entry->usage);
+		mutex_unlock(&dev->struct_mutex);
+		mutex_lock(&entry->mutex);
+		mutex_lock(&dev->struct_mutex);
+		list_del_init(l);
+		if (entry->priv_flags & _DRM_BO_FLAG_UNFENCED) {
+			count++;
+			if (entry->fence)
+				drm_fence_usage_deref_locked(&entry->fence);
+			entry->fence = drm_fence_reference_locked(fence);
+			entry->fence_class = entry->new_fence_class;
+			entry->fence_type = entry->new_fence_type;
+			DRM_FLAG_MASKED(entry->priv_flags, 0,
+					_DRM_BO_FLAG_UNFENCED);
+			wake_up_all(&entry->event_queue);
+			drm_bo_add_to_lru(entry);
+		}
+		mutex_unlock(&entry->mutex);
+		drm_bo_usage_deref_locked(&entry);
+		l = list->next;
+	}
+	DRM_DEBUG("Fenced %d buffers\n", count);
+out:
+	mutex_unlock(&dev->struct_mutex);
+	*used_fence = fence;
+	return ret;
+}
+EXPORT_SYMBOL(drm_fence_buffer_objects);
+
+/*
+ * bo->mutex locked
+ */
+
+static int drm_bo_evict(struct drm_buffer_object *bo, unsigned mem_type,
+			int no_wait)
+{
+	int ret = 0;
+	struct drm_device *dev = bo->dev;
+	struct drm_bo_mem_reg evict_mem;
+
+	/*
+	 * Someone might have modified the buffer before we took the
+	 * buffer mutex.
+	 */
+
+	do {
+		bo->priv_flags &= ~_DRM_BO_FLAG_UNLOCKED;
+
+		if (unlikely(bo->mem.flags &
+			     (DRM_BO_FLAG_NO_MOVE | DRM_BO_FLAG_NO_EVICT)))
+			goto out_unlock;
+		if (unlikely(bo->priv_flags & _DRM_BO_FLAG_UNFENCED))
+			goto out_unlock;
+		if (unlikely(bo->mem.mem_type != mem_type))
+			goto out_unlock;
+		ret = drm_bo_wait(bo, 0, 1, no_wait, 0);
+		if (ret)
+			goto out_unlock;
+
+	} while(bo->priv_flags & _DRM_BO_FLAG_UNLOCKED);
+
+	evict_mem = bo->mem;
+	evict_mem.mm_node = NULL;
+	evict_mem.proposed_flags = dev->driver->bo_driver->evict_flags(bo);
+
+	mutex_lock(&dev->struct_mutex);
+	list_del_init(&bo->lru);
+	mutex_unlock(&dev->struct_mutex);
+
+	ret = drm_bo_mem_space(bo, &evict_mem, no_wait);
+
+	if (ret) {
+		if (ret != -EAGAIN)
+			DRM_ERROR("Failed to find memory space for "
+				  "buffer 0x%p eviction.\n", bo);
+		goto out;
+	}
+
+	ret = drm_bo_handle_move_mem(bo, &evict_mem, 1, no_wait);
+
+	if (ret) {
+		if (ret != -EAGAIN)
+			DRM_ERROR("Buffer eviction failed\n");
+		goto out;
+	}
+
+	DRM_FLAG_MASKED(bo->priv_flags, _DRM_BO_FLAG_EVICTED,
+			_DRM_BO_FLAG_EVICTED);
+
+out:
+	mutex_lock(&dev->struct_mutex);
+	if (evict_mem.mm_node) {
+		if (evict_mem.mm_node != bo->pinned_node)
+			drm_mm_put_block(evict_mem.mm_node);
+		evict_mem.mm_node = NULL;
+	}
+	drm_bo_add_to_lru(bo);
+	BUG_ON(bo->priv_flags & _DRM_BO_FLAG_UNLOCKED);
+out_unlock:
+	mutex_unlock(&dev->struct_mutex);
+
+	return ret;
+}
+
+/**
+ * Repeatedly evict memory from the LRU for @mem_type until we create enough
+ * space, or we've evicted everything and there isn't enough space.
+ */
+static int drm_bo_mem_force_space(struct drm_device *dev,
+				  struct drm_bo_mem_reg *mem,
+				  uint32_t mem_type, int no_wait)
+{
+	struct drm_mm_node *node;
+	struct drm_buffer_manager *bm = &dev->bm;
+	struct drm_buffer_object *entry;
+	struct drm_mem_type_manager *man = &bm->man[mem_type];
+	struct list_head *lru;
+	unsigned long num_pages = mem->num_pages;
+	int ret;
+
+	mutex_lock(&dev->struct_mutex);
+	do {
+		node = drm_mm_search_free(&man->manager, num_pages,
+					  mem->page_alignment, 1);
+		if (node)
+			break;
+
+		lru = &man->lru;
+		if (lru->next == lru)
+			break;
+
+		entry = list_entry(lru->next, struct drm_buffer_object, lru);
+		atomic_inc(&entry->usage);
+		mutex_unlock(&dev->struct_mutex);
+		mutex_lock(&entry->mutex);
+		ret = drm_bo_evict(entry, mem_type, no_wait);
+		mutex_unlock(&entry->mutex);
+		drm_bo_usage_deref_unlocked(&entry);
+		if (ret)
+			return ret;
+		mutex_lock(&dev->struct_mutex);
+	} while (1);
+
+	if (!node) {
+		mutex_unlock(&dev->struct_mutex);
+		return -ENOMEM;
+	}
+
+	node = drm_mm_get_block(node, num_pages, mem->page_alignment);
+	if (unlikely(!node)) {
+		mutex_unlock(&dev->struct_mutex);
+		return -ENOMEM;
+	}
+
+	mutex_unlock(&dev->struct_mutex);
+	mem->mm_node = node;
+	mem->mem_type = mem_type;
+	return 0;
+}
+
+static int drm_bo_mt_compatible(struct drm_mem_type_manager *man,
+				int disallow_fixed,
+				uint32_t mem_type,
+				uint64_t mask, uint32_t *res_mask)
+{
+	uint64_t cur_flags = drm_bo_type_flags(mem_type);
+	uint64_t flag_diff;
+
+	if ((man->flags & _DRM_FLAG_MEMTYPE_FIXED) && disallow_fixed)
+		return 0;
+	if (man->flags & _DRM_FLAG_MEMTYPE_CACHED)
+		cur_flags |= DRM_BO_FLAG_CACHED;
+	if (man->flags & _DRM_FLAG_MEMTYPE_MAPPABLE)
+		cur_flags |= DRM_BO_FLAG_MAPPABLE;
+	if (man->flags & _DRM_FLAG_MEMTYPE_CSELECT)
+		DRM_FLAG_MASKED(cur_flags, mask, DRM_BO_FLAG_CACHED);
+
+	if ((cur_flags & mask & DRM_BO_MASK_MEM) == 0)
+		return 0;
+
+	if (mem_type == DRM_BO_MEM_LOCAL) {
+		*res_mask = cur_flags;
+		return 1;
+	}
+
+	flag_diff = (mask ^ cur_flags);
+	if (flag_diff & DRM_BO_FLAG_CACHED_MAPPED)
+		cur_flags |= DRM_BO_FLAG_CACHED_MAPPED;
+
+	if ((flag_diff & DRM_BO_FLAG_CACHED) &&
+	    (!(mask & DRM_BO_FLAG_CACHED) ||
+	     (mask & DRM_BO_FLAG_FORCE_CACHING)))
+		return 0;
+
+	if ((flag_diff & DRM_BO_FLAG_MAPPABLE) &&
+	    ((mask & DRM_BO_FLAG_MAPPABLE) ||
+	     (mask & DRM_BO_FLAG_FORCE_MAPPABLE)))
+		return 0;
+
+	*res_mask = cur_flags;
+	return 1;
+}
+
+/**
+ * Creates space for memory region @mem according to its type.
+ *
+ * This function first searches for free space in compatible memory types in
+ * the priority order defined by the driver. If free space isn't found, then
+ * drm_bo_mem_force_space is attempted in priority order to evict and find
+ * space.
+ */
+int drm_bo_mem_space(struct drm_buffer_object *bo,
+		     struct drm_bo_mem_reg *mem, int no_wait)
+{
+	struct drm_device *dev = bo->dev;
+	struct drm_buffer_manager *bm = &dev->bm;
+	struct drm_mem_type_manager *man;
+
+	uint32_t num_prios = dev->driver->bo_driver->num_mem_type_prio;
+	const uint32_t *prios = dev->driver->bo_driver->mem_type_prio;
+	uint32_t i;
+	uint32_t mem_type = DRM_BO_MEM_LOCAL;
+	uint32_t cur_flags;
+	int type_found = 0;
+	int type_ok = 0;
+	int has_eagain = 0;
+	struct drm_mm_node *node = NULL;
+	int ret;
+
+	mem->mm_node = NULL;
+	for (i = 0; i < num_prios; ++i) {
+		mem_type = prios[i];
+		man = &bm->man[mem_type];
+
+		type_ok = drm_bo_mt_compatible(man,
+					       bo->type == drm_bo_type_user,
+					       mem_type, mem->proposed_flags,
+					       &cur_flags);
+
+		if (!type_ok)
+			continue;
+
+		if (mem_type == DRM_BO_MEM_LOCAL)
+			break;
+
+		if ((mem_type == bo->pinned_mem_type) &&
+		    (bo->pinned_node != NULL)) {
+			node = bo->pinned_node;
+			break;
+		}
+
+		mutex_lock(&dev->struct_mutex);
+		if (man->has_type && man->use_type) {
+			type_found = 1;
+			node = drm_mm_search_free(&man->manager, mem->num_pages,
+						  mem->page_alignment, 1);
+			if (node)
+				node = drm_mm_get_block(node, mem->num_pages,
+							mem->page_alignment);
+		}
+		mutex_unlock(&dev->struct_mutex);
+		if (node)
+			break;
+	}
+
+	if ((type_ok && (mem_type == DRM_BO_MEM_LOCAL)) || node) {
+		mem->mm_node = node;
+		mem->mem_type = mem_type;
+		mem->flags = cur_flags;
+		return 0;
+	}
+
+	if (!type_found)
+		return -EINVAL;
+
+	num_prios = dev->driver->bo_driver->num_mem_busy_prio;
+	prios = dev->driver->bo_driver->mem_busy_prio;
+
+	for (i = 0; i < num_prios; ++i) {
+		mem_type = prios[i];
+		man = &bm->man[mem_type];
+
+		if (!man->has_type)
+			continue;
+
+		if (!drm_bo_mt_compatible(man,
+					  bo->type == drm_bo_type_user,
+					  mem_type,
+					  mem->proposed_flags,
+					  &cur_flags))
+			continue;
+
+		ret = drm_bo_mem_force_space(dev, mem, mem_type, no_wait);
+
+		if (ret == 0 && mem->mm_node) {
+			mem->flags = cur_flags;
+			return 0;
+		}
+
+		if (ret == -EAGAIN)
+			has_eagain = 1;
+	}
+
+	ret = (has_eagain) ?
-EAGAIN : -ENOMEM;
+	return ret;
+}
+EXPORT_SYMBOL(drm_bo_mem_space);
+
+/*
+ * drm_bo_modify_proposed_flags:
+ *
+ * @bo: the buffer object getting new flags
+ *
+ * @new_flags: the new set of proposed flag bits
+ *
+ * @new_mask: the mask of bits changed in new_flags
+ *
+ * Modify the proposed_flag bits in @bo
+ */
+static int drm_bo_modify_proposed_flags (struct drm_buffer_object *bo,
+					 uint64_t new_flags, uint64_t new_mask)
+{
+	uint32_t new_access;
+
+	/* Copy unchanging bits from existing proposed_flags */
+	DRM_FLAG_MASKED(new_flags, bo->mem.proposed_flags, ~new_mask);
+
+	if (bo->type == drm_bo_type_user &&
+	    ((new_flags & (DRM_BO_FLAG_CACHED | DRM_BO_FLAG_FORCE_CACHING)) !=
+	     (DRM_BO_FLAG_CACHED | DRM_BO_FLAG_FORCE_CACHING))) {
+		DRM_ERROR("User buffers require cache-coherent memory.\n");
+		return -EINVAL;
+	}
+
+	if (bo->type != drm_bo_type_kernel &&
+	    (new_mask & DRM_BO_FLAG_NO_EVICT) && !DRM_SUSER(DRM_CURPROC)) {
+		DRM_ERROR("DRM_BO_FLAG_NO_EVICT is only available to "
+			  "privileged processes.\n");
+		return -EPERM;
+	}
+
+	if (likely(new_mask & DRM_BO_MASK_MEM) &&
+	    (bo->mem.flags & DRM_BO_FLAG_NO_EVICT) &&
+	    !DRM_SUSER(DRM_CURPROC)) {
+		if (likely(bo->mem.flags & new_flags & new_mask &
+			   DRM_BO_MASK_MEM))
+			new_flags = (new_flags & ~DRM_BO_MASK_MEM) |
+				(bo->mem.flags & DRM_BO_MASK_MEM);
+		else {
+			DRM_ERROR("Incompatible memory type specification "
+				  "for NO_EVICT buffer.\n");
+			return -EPERM;
+		}
+	}
+
+	if ((new_flags & DRM_BO_FLAG_NO_MOVE)) {
+		DRM_ERROR("DRM_BO_FLAG_NO_MOVE is not properly implemented yet.\n");
+		return -EPERM;
+	}
+
+	new_access = new_flags & (DRM_BO_FLAG_EXE | DRM_BO_FLAG_WRITE |
+				  DRM_BO_FLAG_READ);
+
+	if (new_access == 0) {
+		DRM_ERROR("Invalid buffer object rwx properties\n");
+		return -EINVAL;
+	}
+
+	bo->mem.proposed_flags = new_flags;
+	return 0;
+}
+
+/*
+ * Call bo->mutex locked.
+ * Returns -EBUSY if the buffer is currently rendered to or from. 0 otherwise.
+ * Doesn't do any fence flushing as opposed to the drm_bo_busy function.
+ */
+
+int drm_bo_quick_busy(struct drm_buffer_object *bo, int check_unfenced)
+{
+	struct drm_fence_object *fence = bo->fence;
+
+	if (check_unfenced && (bo->priv_flags & _DRM_BO_FLAG_UNFENCED))
+		return -EBUSY;
+
+	if (fence) {
+		if (drm_fence_object_signaled(fence, bo->fence_type)) {
+			drm_fence_usage_deref_unlocked(&bo->fence);
+			return 0;
+		}
+		return -EBUSY;
+	}
+	return 0;
+}
+
+int drm_bo_evict_cached(struct drm_buffer_object *bo)
+{
+	int ret = 0;
+
+	BUG_ON(bo->priv_flags & _DRM_BO_FLAG_UNFENCED);
+	if (bo->mem.mm_node)
+		ret = drm_bo_evict(bo, DRM_BO_MEM_TT, 1);
+	return ret;
+}
+EXPORT_SYMBOL(drm_bo_evict_cached);
+
+/*
+ * Wait until a buffer is unmapped.
+ */
+
+static int drm_bo_wait_unmapped(struct drm_buffer_object *bo, int no_wait)
+{
+	int ret = 0;
+
+	if (likely(atomic_read(&bo->mapped) == 0))
+		return 0;
+
+	if (unlikely(no_wait))
+		return -EBUSY;
+
+	do {
+		mutex_unlock(&bo->mutex);
+		ret = wait_event_interruptible(bo->event_queue,
+					       atomic_read(&bo->mapped) == 0);
+		mutex_lock(&bo->mutex);
+		bo->priv_flags |= _DRM_BO_FLAG_UNLOCKED;
+
+		if (ret == -ERESTARTSYS)
+			ret = -EAGAIN;
+	} while ((ret == 0) && atomic_read(&bo->mapped) > 0);
+
+	return ret;
+}
+
+/*
+ * bo->mutex locked.
+ * Note that new_mem_flags are NOT transferred to the bo->mem.proposed_flags.
+ */
+
+int drm_bo_move_buffer(struct drm_buffer_object *bo, uint64_t new_mem_flags,
+		       int no_wait, int move_unfenced)
+{
+	struct drm_device *dev = bo->dev;
+	struct drm_buffer_manager *bm = &dev->bm;
+	int ret = 0;
+	struct drm_bo_mem_reg mem;
+
+	BUG_ON(bo->fence != NULL);
+
+	mem.num_pages = bo->num_pages;
+	mem.size = mem.num_pages << PAGE_SHIFT;
+	mem.proposed_flags = new_mem_flags;
+	mem.page_alignment = bo->mem.page_alignment;
+
+	mutex_lock(&bm->evict_mutex);
+	mutex_lock(&dev->struct_mutex);
+	list_del_init(&bo->lru);
+	mutex_unlock(&dev->struct_mutex);
+
+	/*
+	 * Determine where to move the buffer.
+ */ + ret = drm_bo_mem_space(bo, &mem, no_wait); + if (ret) + goto out_unlock; + + ret = drm_bo_handle_move_mem(bo, &mem, 0, no_wait); + +out_unlock: + mutex_lock(&dev->struct_mutex); + if (ret || !move_unfenced) { + if (mem.mm_node) { + if (mem.mm_node != bo->pinned_node) + drm_mm_put_block(mem.mm_node); + mem.mm_node = NULL; + } + drm_bo_add_to_lru(bo); + if (bo->priv_flags & _DRM_BO_FLAG_UNFENCED) { + wake_up_all(&bo->event_queue); + DRM_FLAG_MASKED(bo->priv_flags, 0, + _DRM_BO_FLAG_UNFENCED); + } + } else { + list_add_tail(&bo->lru, &bm->unfenced); + DRM_FLAG_MASKED(bo->priv_flags, _DRM_BO_FLAG_UNFENCED, + _DRM_BO_FLAG_UNFENCED); + } + /* clear the clean flags */ + bo->mem.flags &= ~DRM_BO_FLAG_CLEAN; + bo->mem.proposed_flags &= ~DRM_BO_FLAG_CLEAN; + + mutex_unlock(&dev->struct_mutex); + mutex_unlock(&bm->evict_mutex); + return ret; +} + +static int drm_bo_mem_compat(struct drm_bo_mem_reg *mem) +{ + uint32_t flag_diff = (mem->proposed_flags ^ mem->flags); + + if ((mem->proposed_flags & mem->flags & DRM_BO_MASK_MEM) == 0) + return 0; + if ((flag_diff & DRM_BO_FLAG_CACHED) && + (/* !(mem->proposed_flags & DRM_BO_FLAG_CACHED) ||*/ + (mem->proposed_flags & DRM_BO_FLAG_FORCE_CACHING))) + return 0; + + if ((flag_diff & DRM_BO_FLAG_MAPPABLE) && + ((mem->proposed_flags & DRM_BO_FLAG_MAPPABLE) || + (mem->proposed_flags & DRM_BO_FLAG_FORCE_MAPPABLE))) + return 0; + return 1; +} + +/** + * drm_buffer_object_validate: + * + * @bo: the buffer object to modify + * + * @fence_class: the new fence class covering this buffer + * + * @move_unfenced: a boolean indicating whether switching the + * memory space of this buffer should cause the buffer to + * be placed on the unfenced list. + * + * @no_wait: whether this function should return -EBUSY instead + * of waiting. + * + * Change buffer access parameters. 
This can involve moving + * the buffer to the correct memory type, pinning the buffer + * or changing the class/type of fence covering this buffer + * + * Must be called with bo locked. + */ + +static int drm_buffer_object_validate(struct drm_buffer_object *bo, + uint32_t fence_class, + int move_unfenced, int no_wait, + int move_buffer) +{ + struct drm_device *dev = bo->dev; + struct drm_buffer_manager *bm = &dev->bm; + int ret; + + if (move_buffer) { + ret = drm_bo_move_buffer(bo, bo->mem.proposed_flags, no_wait, + move_unfenced); + if (ret) { + if (ret != -EAGAIN) + DRM_ERROR("Failed moving buffer.\n"); + if (ret == -ENOMEM) + DRM_ERROR("Out of aperture space or " + "DRM memory quota.\n"); + return ret; + } + } + + /* + * Pinned buffers. + */ + + if (bo->mem.proposed_flags & (DRM_BO_FLAG_NO_EVICT | DRM_BO_FLAG_NO_MOVE)) { + bo->pinned_mem_type = bo->mem.mem_type; + mutex_lock(&dev->struct_mutex); + list_del_init(&bo->pinned_lru); + drm_bo_add_to_pinned_lru(bo); + + if (bo->pinned_node != bo->mem.mm_node) { + if (bo->pinned_node != NULL) + drm_mm_put_block(bo->pinned_node); + bo->pinned_node = bo->mem.mm_node; + } + + mutex_unlock(&dev->struct_mutex); + + } else if (bo->pinned_node != NULL) { + + mutex_lock(&dev->struct_mutex); + + if (bo->pinned_node != bo->mem.mm_node) + drm_mm_put_block(bo->pinned_node); + + list_del_init(&bo->pinned_lru); + bo->pinned_node = NULL; + mutex_unlock(&dev->struct_mutex); + + } + + /* + * We might need to add a TTM. + */ + + if (bo->mem.mem_type == DRM_BO_MEM_LOCAL && bo->ttm == NULL) { + ret = drm_bo_add_ttm(bo); + if (ret) + return ret; + } + /* + * Validation has succeeded, move the access and other + * non-mapping-related flag bits from the proposed flags to + * the active flags + */ + + DRM_FLAG_MASKED(bo->mem.flags, bo->mem.proposed_flags, ~DRM_BO_MASK_MEMTYPE); + + /* + * Finally, adjust lru to be sure. 
+ */ + + mutex_lock(&dev->struct_mutex); + list_del(&bo->lru); + if (move_unfenced) { + list_add_tail(&bo->lru, &bm->unfenced); + DRM_FLAG_MASKED(bo->priv_flags, _DRM_BO_FLAG_UNFENCED, + _DRM_BO_FLAG_UNFENCED); + } else { + drm_bo_add_to_lru(bo); + if (bo->priv_flags & _DRM_BO_FLAG_UNFENCED) { + wake_up_all(&bo->event_queue); + DRM_FLAG_MASKED(bo->priv_flags, 0, + _DRM_BO_FLAG_UNFENCED); + } + } + mutex_unlock(&dev->struct_mutex); + + return 0; +} + +/* + * This function is called with bo->mutex locked, but may release it + * temporarily to wait for events. + */ + +static int drm_bo_prepare_for_validate(struct drm_buffer_object *bo, + uint64_t flags, + uint64_t mask, + uint32_t hint, + uint32_t fence_class, + int no_wait, + int *move_buffer) +{ + struct drm_device *dev = bo->dev; + struct drm_bo_driver *driver = dev->driver->bo_driver; + uint32_t ftype; + + int ret; + + + ret = drm_bo_modify_proposed_flags (bo, flags, mask); + if (ret) + return ret; + + DRM_DEBUG("Proposed flags 0x%016llx, Old flags 0x%016llx\n", + (unsigned long long) bo->mem.proposed_flags, + (unsigned long long) bo->mem.flags); + + ret = drm_bo_wait_unmapped(bo, no_wait); + if (ret) + return ret; + + ret = driver->fence_type(bo, &fence_class, &ftype); + + if (ret) { + DRM_ERROR("Driver did not support given buffer permissions.\n"); + return ret; + } + + /* + * We're switching command submission mechanism, + * or cannot simply rely on the hardware serializing for us. + * Insert a driver-dependant barrier or wait for buffer idle. + */ + + if ((fence_class != bo->fence_class) || + ((ftype ^ bo->fence_type) & bo->fence_type)) { + + ret = -EINVAL; + if (driver->command_stream_barrier) { + ret = driver->command_stream_barrier(bo, + fence_class, + ftype, + no_wait); + } + if (ret && ret != -EAGAIN) + ret = drm_bo_wait(bo, 0, 1, no_wait, 1); + + if (ret) + return ret; + } + + bo->new_fence_class = fence_class; + bo->new_fence_type = ftype; + + /* + * Check whether we need to move buffer. 
+ */
+
+	*move_buffer = 0;
+	if (!drm_bo_mem_compat(&bo->mem)) {
+		*move_buffer = 1;
+		ret = drm_bo_wait(bo, 0, 1, no_wait, 1);
+	}
+
+	return ret;
+}
+
+/**
+ * drm_bo_do_validate:
+ *
+ * @bo: the buffer object
+ *
+ * @flags: access rights, mapping parameters and cacheability. See
+ * the DRM_BO_FLAG_* values in drm.h
+ *
+ * @mask: Which flag values to change; this allows callers to modify
+ * things without knowing the current state of other flags.
+ *
+ * @hint: changes the procedure for this operation, see the DRM_BO_HINT_*
+ * values in drm.h.
+ *
+ * @fence_class: a driver-specific way of doing fences. Presumably,
+ * this would be used if the driver had more than one submission and
+ * fencing mechanism. At this point, there isn't any use of this
+ * from the user mode code.
+ *
+ * 'validate' a buffer object. This changes where the buffer is
+ * located, along with changing access modes.
+ */
+
+int drm_bo_do_validate(struct drm_buffer_object *bo,
+		       uint64_t flags, uint64_t mask, uint32_t hint,
+		       uint32_t fence_class)
+{
+	int ret;
+	int no_wait = (hint & DRM_BO_HINT_DONT_BLOCK) != 0;
+	int move_buffer;
+
+	mutex_lock(&bo->mutex);
+
+	do {
+		bo->priv_flags &= ~_DRM_BO_FLAG_UNLOCKED;
+
+		ret = drm_bo_prepare_for_validate(bo, flags, mask, hint,
+						  fence_class, no_wait,
+						  &move_buffer);
+		if (ret)
+			goto out;
+
+	} while (unlikely(bo->priv_flags & _DRM_BO_FLAG_UNLOCKED));
+
+	ret = drm_buffer_object_validate(bo,
+					 fence_class,
+					 !(hint & DRM_BO_HINT_DONT_FENCE),
+					 no_wait,
+					 move_buffer);
+
+	BUG_ON(bo->priv_flags & _DRM_BO_FLAG_UNLOCKED);
+out:
+	mutex_unlock(&bo->mutex);
+
+	return ret;
+}
+EXPORT_SYMBOL(drm_bo_do_validate);
+
+int drm_buffer_object_create(struct drm_device *dev,
+			     unsigned long size,
+			     enum drm_bo_type type,
+			     uint64_t flags,
+			     uint32_t hint,
+			     uint32_t page_alignment,
+			     unsigned long buffer_start,
+			     struct drm_buffer_object **buf_obj)
+{
+	struct drm_buffer_manager *bm =
&dev->bm; + struct drm_buffer_object *bo; + int ret = 0; + unsigned long num_pages; + + size += buffer_start & ~PAGE_MASK; + num_pages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT; + if (num_pages == 0) { + DRM_ERROR("Illegal buffer object size %ld.\n", size); + return -EINVAL; + } + + bo = drm_ctl_calloc(1, sizeof(*bo), DRM_MEM_BUFOBJ); + + if (!bo) + return -ENOMEM; + + mutex_init(&bo->mutex); + mutex_lock(&bo->mutex); + + atomic_set(&bo->usage, 1); + atomic_set(&bo->mapped, 0); + DRM_INIT_WAITQUEUE(&bo->event_queue); + INIT_LIST_HEAD(&bo->lru); + INIT_LIST_HEAD(&bo->pinned_lru); + INIT_LIST_HEAD(&bo->ddestroy); +#ifdef DRM_ODD_MM_COMPAT + INIT_LIST_HEAD(&bo->p_mm_list); + INIT_LIST_HEAD(&bo->vma_list); +#endif + bo->dev = dev; + bo->type = type; + bo->num_pages = num_pages; + bo->mem.mem_type = DRM_BO_MEM_LOCAL; + bo->mem.num_pages = bo->num_pages; + bo->mem.mm_node = NULL; + bo->mem.page_alignment = page_alignment; + bo->buffer_start = buffer_start & PAGE_MASK; + bo->priv_flags = 0; + bo->mem.flags = (DRM_BO_FLAG_MEM_LOCAL | DRM_BO_FLAG_CACHED | + DRM_BO_FLAG_MAPPABLE | DRM_BO_FLAG_CLEAN); + bo->mem.proposed_flags = 0; + atomic_inc(&bm->count); + /* + * Use drm_bo_modify_proposed_flags to error-check the proposed flags + */ + flags |= DRM_BO_FLAG_CLEAN; + + ret = drm_bo_modify_proposed_flags (bo, flags, flags); + if (ret) + goto out_err; + + /* + * For drm_bo_type_device buffers, allocate + * address space from the device so that applications + * can mmap the buffer from there + */ + if (bo->type == drm_bo_type_device) { + mutex_lock(&dev->struct_mutex); + ret = drm_bo_setup_vm_locked(bo); + mutex_unlock(&dev->struct_mutex); + if (ret) + goto out_err; + } + + mutex_unlock(&bo->mutex); + ret = drm_bo_do_validate(bo, 0, 0, hint | DRM_BO_HINT_DONT_FENCE, + 0); + if (ret) + goto out_err_unlocked; + + *buf_obj = bo; + return 0; + +out_err: + mutex_unlock(&bo->mutex); +out_err_unlocked: + drm_bo_usage_deref_unlocked(&bo); + return ret; +} 
+EXPORT_SYMBOL(drm_buffer_object_create); + +static int drm_bo_leave_list(struct drm_buffer_object *bo, + uint32_t mem_type, + ... [truncated message content] |
From: Jesse B. <jb...@vi...> - 2008-10-30 22:08:52
|
commit 2f42bfd09aee286c1256a65deaa998d013a7db0a Author: Jesse Barnes <jb...@vi...> Date: Thu Oct 30 13:37:15 2008 -0700 DRM: i915 mode setting support This commit adds support to the i915 driver for the new DRM mode setting interfaces. When the new 'modeset' module argument is set at driver load time, the i915 driver will assume it is in full control of the video hardware, including all outputs, registers, and ring buffers. Like the core code, much of this code came from the X.Org xf86-video-intel driver, with modifications from Dave, Jesse and Jakob. Signed-off-by: Jesse Barnes <jb...@vi...> diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile index d8fb5d8..25b35dd 100644 --- a/drivers/gpu/drm/i915/Makefile +++ b/drivers/gpu/drm/i915/Makefile @@ -3,12 +3,16 @@ # Direct Rendering Infrastructure (DRI) in XFree86 4.1.0 and higher. ccflags-y := -Iinclude/drm -i915-y := i915_drv.o i915_dma.o i915_irq.o i915_mem.o \ - i915_suspend.o \ - i915_gem.o \ - i915_gem_debug.o \ - i915_gem_proc.o \ - i915_gem_tiling.o +i915-y := i915_drv.o i915_dma.o i915_irq.o i915_mem.o i915_init.o \ + i915_suspend.o \ + i915_gem.o \ + i915_gem_debug.o \ + i915_gem_proc.o \ + i915_gem_tiling.o \ + intel_display.o intel_crt.o intel_lvds.o intel_bios.o \ + intel_sdvo.o intel_modes.o intel_i2c.o i915_init.o intel_fb.o \ + intel_tv.o intel_dvo.o dvo_ch7xxx.o \ + dvo_ch7017.o dvo_ivch.o dvo_tfp410.o dvo_sil164.o i915-$(CONFIG_ACPI) += i915_opregion.o i915-$(CONFIG_COMPAT) += i915_ioc32.o diff --git a/drivers/gpu/drm/i915/dvo.h b/drivers/gpu/drm/i915/dvo.h new file mode 100644 index 0000000..b122ea1 --- /dev/null +++ b/drivers/gpu/drm/i915/dvo.h @@ -0,0 +1,159 @@ +/* + * Copyright © 2006 Eric Anholt + * + * Permission to use, copy, modify, distribute, and sell this software and its + * documentation for any purpose is hereby granted without fee, provided that + * the above copyright notice appear in all copies and that both that copyright + * notice and this permission notice 
appear in supporting documentation, and + * that the name of the copyright holders not be used in advertising or + * publicity pertaining to distribution of the software without specific, + * written prior permission. The copyright holders make no representations + * about the suitability of this software for any purpose. It is provided "as + * is" without express or implied warranty. + * + * THE COPYRIGHT HOLDERS DISCLAIM ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, + * INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN NO + * EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY SPECIAL, INDIRECT OR + * CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, + * DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER + * TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE + * OF THIS SOFTWARE. + */ + +#ifndef _INTEL_DVO_H +#define _INTEL_DVO_H + +#include <linux/i2c.h> +#include "drmP.h" +#include "drm.h" +#include "drm_crtc.h" +#include "intel_drv.h" + +struct intel_dvo_device { + char *name; + int type; + /* DVOA/B/C output register */ + u32 dvo_reg; + /* GPIO register used for i2c bus to control this device */ + u32 gpio; + int slave_addr; + struct intel_i2c_chan *i2c_bus; + + const struct intel_dvo_dev_ops *dev_ops; + void *dev_priv; + + struct drm_display_mode *panel_fixed_mode; + bool panel_wants_dither; +}; + +struct intel_dvo_dev_ops { + /* + * Initialize the device at startup time. + * Returns NULL if the device does not exist. + */ + bool (*init)(struct intel_dvo_device *dvo, + struct intel_i2c_chan *i2cbus); + + /* + * Called to allow the output a chance to create properties after the + * RandR objects have been created. + */ + void (*create_resources)(struct intel_dvo_device *dvo); + + /* + * Turn on/off output or set intermediate power levels if available. + * + * Unsupported intermediate modes drop to the lower power setting. 
+ * If the mode is DPMSModeOff, the output must be disabled, + * as the DPLL may be disabled afterwards. + */ + void (*dpms)(struct intel_dvo_device *dvo, int mode); + + /* + * Saves the output's state for restoration on VT switch. + */ + void (*save)(struct intel_dvo_device *dvo); + + /* + * Restore's the output's state at VT switch. + */ + void (*restore)(struct intel_dvo_device *dvo); + + /* + * Callback for testing a video mode for a given output. + * + * This function should only check for cases where a mode can't + * be supported on the output specifically, and not represent + * generic CRTC limitations. + * + * \return MODE_OK if the mode is valid, or another MODE_* otherwise. + */ + int (*mode_valid)(struct intel_dvo_device *dvo, + struct drm_display_mode *mode); + + /* + * Callback to adjust the mode to be set in the CRTC. + * + * This allows an output to adjust the clock or even the entire set of + * timings, which is used for panels with fixed timings or for + * buses with clock limitations. + */ + bool (*mode_fixup)(struct intel_dvo_device *dvo, + struct drm_display_mode *mode, + struct drm_display_mode *adjusted_mode); + + /* + * Callback for preparing mode changes on an output + */ + void (*prepare)(struct intel_dvo_device *dvo); + + /* + * Callback for committing mode changes on an output + */ + void (*commit)(struct intel_dvo_device *dvo); + + /* + * Callback for setting up a video mode after fixups have been made. + * + * This is only called while the output is disabled. The dpms callback + * must be all that's necessary for the output, to turn the output on + * after this function is called. + */ + void (*mode_set)(struct intel_dvo_device *dvo, + struct drm_display_mode *mode, + struct drm_display_mode *adjusted_mode); + + /* + * Probe for a connected output, and return detect_status. + */ + enum drm_connector_status (*detect)(struct intel_dvo_device *dvo); + + /** + * Query the device for the modes it provides. 
+ * + * This function may also update MonInfo, mm_width, and mm_height. + * + * \return singly-linked list of modes or NULL if no modes found. + */ + struct drm_display_mode *(*get_modes)(struct intel_dvo_device *dvo); + +#ifdef RANDR_12_INTERFACE + /** + * Callback when an output's property has changed. + */ + bool (*set_property)(struct intel_dvo_device *dvo, + struct drm_property *property, uint64_t val); +#endif + + /** + * Clean up driver-specific bits of the output + */ + void (*destroy) (struct intel_dvo_device *dvo); + + /** + * Debugging hook to dump device registers to log file + */ + void (*dump_regs)(struct intel_dvo_device *dvo); +}; + +#endif /* _INTEL_DVO_H */ diff --git a/drivers/gpu/drm/i915/dvo_ch7017.c b/drivers/gpu/drm/i915/dvo_ch7017.c new file mode 100644 index 0000000..b10e038 --- /dev/null +++ b/drivers/gpu/drm/i915/dvo_ch7017.c @@ -0,0 +1,454 @@ +/* + * Copyright © 2006 Intel Corporation + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice (including the next + * paragraph) shall be included in all copies or substantial portions of the + * Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + * DEALINGS IN THE SOFTWARE. + * + * Authors: + * Eric Anholt <er...@an...> + * + */ + +#include "dvo.h" + +#define CH7017_TV_DISPLAY_MODE 0x00 +#define CH7017_FLICKER_FILTER 0x01 +#define CH7017_VIDEO_BANDWIDTH 0x02 +#define CH7017_TEXT_ENHANCEMENT 0x03 +#define CH7017_START_ACTIVE_VIDEO 0x04 +#define CH7017_HORIZONTAL_POSITION 0x05 +#define CH7017_VERTICAL_POSITION 0x06 +#define CH7017_BLACK_LEVEL 0x07 +#define CH7017_CONTRAST_ENHANCEMENT 0x08 +#define CH7017_TV_PLL 0x09 +#define CH7017_TV_PLL_M 0x0a +#define CH7017_TV_PLL_N 0x0b +#define CH7017_SUB_CARRIER_0 0x0c +#define CH7017_CIV_CONTROL 0x10 +#define CH7017_CIV_0 0x11 +#define CH7017_CHROMA_BOOST 0x14 +#define CH7017_CLOCK_MODE 0x1c +#define CH7017_INPUT_CLOCK 0x1d +#define CH7017_GPIO_CONTROL 0x1e +#define CH7017_INPUT_DATA_FORMAT 0x1f +#define CH7017_CONNECTION_DETECT 0x20 +#define CH7017_DAC_CONTROL 0x21 +#define CH7017_BUFFERED_CLOCK_OUTPUT 0x22 +#define CH7017_DEFEAT_VSYNC 0x47 +#define CH7017_TEST_PATTERN 0x48 + +#define CH7017_POWER_MANAGEMENT 0x49 +/** Enables the TV output path. 
*/ +#define CH7017_TV_EN (1 << 0) +#define CH7017_DAC0_POWER_DOWN (1 << 1) +#define CH7017_DAC1_POWER_DOWN (1 << 2) +#define CH7017_DAC2_POWER_DOWN (1 << 3) +#define CH7017_DAC3_POWER_DOWN (1 << 4) +/** Powers down the TV out block, and DAC0-3 */ +#define CH7017_TV_POWER_DOWN_EN (1 << 5) + +#define CH7017_VERSION_ID 0x4a + +#define CH7017_DEVICE_ID 0x4b +#define CH7017_DEVICE_ID_VALUE 0x1b +#define CH7018_DEVICE_ID_VALUE 0x1a +#define CH7019_DEVICE_ID_VALUE 0x19 + +#define CH7017_XCLK_D2_ADJUST 0x53 +#define CH7017_UP_SCALER_COEFF_0 0x55 +#define CH7017_UP_SCALER_COEFF_1 0x56 +#define CH7017_UP_SCALER_COEFF_2 0x57 +#define CH7017_UP_SCALER_COEFF_3 0x58 +#define CH7017_UP_SCALER_COEFF_4 0x59 +#define CH7017_UP_SCALER_VERTICAL_INC_0 0x5a +#define CH7017_UP_SCALER_VERTICAL_INC_1 0x5b +#define CH7017_GPIO_INVERT 0x5c +#define CH7017_UP_SCALER_HORIZONTAL_INC_0 0x5d +#define CH7017_UP_SCALER_HORIZONTAL_INC_1 0x5e + +#define CH7017_HORIZONTAL_ACTIVE_PIXEL_INPUT 0x5f +/**< Low bits of horizontal active pixel input */ + +#define CH7017_ACTIVE_INPUT_LINE_OUTPUT 0x60 +/** High bits of horizontal active pixel input */ +#define CH7017_LVDS_HAP_INPUT_MASK (0x7 << 0) +/** High bits of vertical active line output */ +#define CH7017_LVDS_VAL_HIGH_MASK (0x7 << 3) + +#define CH7017_VERTICAL_ACTIVE_LINE_OUTPUT 0x61 +/**< Low bits of vertical active line output */ + +#define CH7017_HORIZONTAL_ACTIVE_PIXEL_OUTPUT 0x62 +/**< Low bits of horizontal active pixel output */ + +#define CH7017_LVDS_POWER_DOWN 0x63 +/** High bits of horizontal active pixel output */ +#define CH7017_LVDS_HAP_HIGH_MASK (0x7 << 0) +/** Enables the LVDS power down state transition */ +#define CH7017_LVDS_POWER_DOWN_EN (1 << 6) +/** Enables the LVDS upscaler */ +#define CH7017_LVDS_UPSCALER_EN (1 << 7) +#define CH7017_LVDS_POWER_DOWN_DEFAULT_RESERVED 0x08 + +#define CH7017_LVDS_ENCODING 0x64 +#define CH7017_LVDS_DITHER_2D (1 << 2) +#define CH7017_LVDS_DITHER_DIS (1 << 3) +#define CH7017_LVDS_DUAL_CHANNEL_EN (1 << 4) 
+#define CH7017_LVDS_24_BIT (1 << 5) + +#define CH7017_LVDS_ENCODING_2 0x65 + +#define CH7017_LVDS_PLL_CONTROL 0x66 +/** Enables the LVDS panel output path */ +#define CH7017_LVDS_PANEN (1 << 0) +/** Enables the LVDS panel backlight */ +#define CH7017_LVDS_BKLEN (1 << 3) + +#define CH7017_POWER_SEQUENCING_T1 0x67 +#define CH7017_POWER_SEQUENCING_T2 0x68 +#define CH7017_POWER_SEQUENCING_T3 0x69 +#define CH7017_POWER_SEQUENCING_T4 0x6a +#define CH7017_POWER_SEQUENCING_T5 0x6b +#define CH7017_GPIO_DRIVER_TYPE 0x6c +#define CH7017_GPIO_DATA 0x6d +#define CH7017_GPIO_DIRECTION_CONTROL 0x6e + +#define CH7017_LVDS_PLL_FEEDBACK_DIV 0x71 +# define CH7017_LVDS_PLL_FEED_BACK_DIVIDER_SHIFT 4 +# define CH7017_LVDS_PLL_FEED_FORWARD_DIVIDER_SHIFT 0 +# define CH7017_LVDS_PLL_FEEDBACK_DEFAULT_RESERVED 0x80 + +#define CH7017_LVDS_PLL_VCO_CONTROL 0x72 +# define CH7017_LVDS_PLL_VCO_DEFAULT_RESERVED 0x80 +# define CH7017_LVDS_PLL_VCO_SHIFT 4 +# define CH7017_LVDS_PLL_POST_SCALE_DIV_SHIFT 0 + +#define CH7017_OUTPUTS_ENABLE 0x73 +# define CH7017_CHARGE_PUMP_LOW 0x0 +# define CH7017_CHARGE_PUMP_HIGH 0x3 +# define CH7017_LVDS_CHANNEL_A (1 << 3) +# define CH7017_LVDS_CHANNEL_B (1 << 4) +# define CH7017_TV_DAC_A (1 << 5) +# define CH7017_TV_DAC_B (1 << 6) +# define CH7017_DDC_SELECT_DC2 (1 << 7) + +#define CH7017_LVDS_OUTPUT_AMPLITUDE 0x74 +#define CH7017_LVDS_PLL_EMI_REDUCTION 0x75 +#define CH7017_LVDS_POWER_DOWN_FLICKER 0x76 + +#define CH7017_LVDS_CONTROL_2 0x78 +# define CH7017_LOOP_FILTER_SHIFT 5 +# define CH7017_PHASE_DETECTOR_SHIFT 0 + +#define CH7017_BANG_LIMIT_CONTROL 0x7f + +struct ch7017_priv { + uint8_t save_hapi; + uint8_t save_vali; + uint8_t save_valo; + uint8_t save_ailo; + uint8_t save_lvds_pll_vco; + uint8_t save_feedback_div; + uint8_t save_lvds_control_2; + uint8_t save_outputs_enable; + uint8_t save_lvds_power_down; + uint8_t save_power_management; +}; + +static void ch7017_dump_regs(struct intel_dvo_device *dvo); +static void ch7017_dpms(struct intel_dvo_device *dvo, int 
mode);
+
+static bool ch7017_read(struct intel_dvo_device *dvo, int addr, uint8_t *val)
+{
+	struct intel_i2c_chan *i2cbus = dvo->i2c_bus;
+	u8 out_buf[2];
+	u8 in_buf[2];
+
+	struct i2c_msg msgs[] = {
+		{
+			.addr = i2cbus->slave_addr,
+			.flags = 0,
+			.len = 1,
+			.buf = out_buf,
+		},
+		{
+			.addr = i2cbus->slave_addr,
+			.flags = I2C_M_RD,
+			.len = 1,
+			.buf = in_buf,
+		}
+	};
+
+	out_buf[0] = addr;
+	out_buf[1] = 0;
+
+	if (i2c_transfer(&i2cbus->adapter, msgs, 2) == 2) {
+		*val = in_buf[0];
+		return true;
+	}
+
+	return false;
+}
+
+static bool ch7017_write(struct intel_dvo_device *dvo, int addr, uint8_t val)
+{
+	struct intel_i2c_chan *i2cbus = dvo->i2c_bus;
+	uint8_t out_buf[2];
+	struct i2c_msg msg = {
+		.addr = i2cbus->slave_addr,
+		.flags = 0,
+		.len = 2,
+		.buf = out_buf,
+	};
+
+	out_buf[0] = addr;
+	out_buf[1] = val;
+
+	if (i2c_transfer(&i2cbus->adapter, &msg, 1) == 1)
+		return true;
+
+	return false;
+}
+
+/** Probes for a CH7017 on the given bus and slave address. */
+static bool ch7017_init(struct intel_dvo_device *dvo,
+			struct intel_i2c_chan *i2cbus)
+{
+	struct ch7017_priv *priv;
+	uint8_t val;
+
+	priv = kzalloc(sizeof(struct ch7017_priv), GFP_KERNEL);
+	if (priv == NULL)
+		return false;
+
+	dvo->i2c_bus = i2cbus;
+	dvo->i2c_bus->slave_addr = dvo->slave_addr;
+	dvo->dev_priv = priv;
+
+	if (!ch7017_read(dvo, CH7017_DEVICE_ID, &val))
+		goto fail;
+
+	if (val != CH7017_DEVICE_ID_VALUE &&
+	    val != CH7018_DEVICE_ID_VALUE &&
+	    val != CH7019_DEVICE_ID_VALUE) {
+		DRM_DEBUG("ch701x not detected, got %d from %s slave %d.\n",
+			  val, i2cbus->adapter.name, i2cbus->slave_addr);
+		goto fail;
+	}
+
+	return true;
+fail:
+	kfree(priv);
+	return false;
+}
+
+static enum drm_connector_status ch7017_detect(struct intel_dvo_device *dvo)
+{
+	return connector_status_unknown;
+}
+
+static enum drm_mode_status ch7017_mode_valid(struct intel_dvo_device *dvo,
+					      struct drm_display_mode *mode)
+{
+	if (mode->clock > 160000)
+		return MODE_CLOCK_HIGH;
+
+	return MODE_OK;
+}
+
+static void ch7017_mode_set(struct intel_dvo_device *dvo, + struct drm_display_mode *mode, + struct drm_display_mode *adjusted_mode) +{ + uint8_t lvds_pll_feedback_div, lvds_pll_vco_control; + uint8_t outputs_enable, lvds_control_2, lvds_power_down; + uint8_t horizontal_active_pixel_input; + uint8_t horizontal_active_pixel_output, vertical_active_line_output; + uint8_t active_input_line_output; + + DRM_DEBUG("Registers before mode setting\n"); + ch7017_dump_regs(dvo); + + /* LVDS PLL settings from page 75 of 7017-7017ds.pdf*/ + if (mode->clock < 100000) { + outputs_enable = CH7017_LVDS_CHANNEL_A | CH7017_CHARGE_PUMP_LOW; + lvds_pll_feedback_div = CH7017_LVDS_PLL_FEEDBACK_DEFAULT_RESERVED | + (2 << CH7017_LVDS_PLL_FEED_BACK_DIVIDER_SHIFT) | + (13 << CH7017_LVDS_PLL_FEED_FORWARD_DIVIDER_SHIFT); + lvds_pll_vco_control = CH7017_LVDS_PLL_VCO_DEFAULT_RESERVED | + (2 << CH7017_LVDS_PLL_VCO_SHIFT) | + (3 << CH7017_LVDS_PLL_POST_SCALE_DIV_SHIFT); + lvds_control_2 = (1 << CH7017_LOOP_FILTER_SHIFT) | + (0 << CH7017_PHASE_DETECTOR_SHIFT); + } else { + outputs_enable = CH7017_LVDS_CHANNEL_A | CH7017_CHARGE_PUMP_HIGH; + lvds_pll_feedback_div = CH7017_LVDS_PLL_FEEDBACK_DEFAULT_RESERVED | + (2 << CH7017_LVDS_PLL_FEED_BACK_DIVIDER_SHIFT) | + (3 << CH7017_LVDS_PLL_FEED_FORWARD_DIVIDER_SHIFT); + lvds_pll_feedback_div = 35; + lvds_control_2 = (3 << CH7017_LOOP_FILTER_SHIFT) | + (0 << CH7017_PHASE_DETECTOR_SHIFT); + if (1) { /* XXX: dual channel panel detection. Assume yes for now. 
*/ + outputs_enable |= CH7017_LVDS_CHANNEL_B; + lvds_pll_vco_control = CH7017_LVDS_PLL_VCO_DEFAULT_RESERVED | + (2 << CH7017_LVDS_PLL_VCO_SHIFT) | + (13 << CH7017_LVDS_PLL_POST_SCALE_DIV_SHIFT); + } else { + lvds_pll_vco_control = CH7017_LVDS_PLL_VCO_DEFAULT_RESERVED | + (1 << CH7017_LVDS_PLL_VCO_SHIFT) | + (13 << CH7017_LVDS_PLL_POST_SCALE_DIV_SHIFT); + } + } + + horizontal_active_pixel_input = mode->hdisplay & 0x00ff; + + vertical_active_line_output = mode->vdisplay & 0x00ff; + horizontal_active_pixel_output = mode->hdisplay & 0x00ff; + + active_input_line_output = ((mode->hdisplay & 0x0700) >> 8) | + (((mode->vdisplay & 0x0700) >> 8) << 3); + + lvds_power_down = CH7017_LVDS_POWER_DOWN_DEFAULT_RESERVED | + (mode->hdisplay & 0x0700) >> 8; + + ch7017_dpms(dvo, DRM_MODE_DPMS_OFF); + ch7017_write(dvo, CH7017_HORIZONTAL_ACTIVE_PIXEL_INPUT, + horizontal_active_pixel_input); + ch7017_write(dvo, CH7017_HORIZONTAL_ACTIVE_PIXEL_OUTPUT, + horizontal_active_pixel_output); + ch7017_write(dvo, CH7017_VERTICAL_ACTIVE_LINE_OUTPUT, + vertical_active_line_output); + ch7017_write(dvo, CH7017_ACTIVE_INPUT_LINE_OUTPUT, + active_input_line_output); + ch7017_write(dvo, CH7017_LVDS_PLL_VCO_CONTROL, lvds_pll_vco_control); + ch7017_write(dvo, CH7017_LVDS_PLL_FEEDBACK_DIV, lvds_pll_feedback_div); + ch7017_write(dvo, CH7017_LVDS_CONTROL_2, lvds_control_2); + ch7017_write(dvo, CH7017_OUTPUTS_ENABLE, outputs_enable); + + /* Turn the LVDS back on with new settings. */ + ch7017_write(dvo, CH7017_LVDS_POWER_DOWN, lvds_power_down); + + DRM_DEBUG("Registers after mode setting\n"); + ch7017_dump_regs(dvo); +} + +/* set the CH7017 power state */ +static void ch7017_dpms(struct intel_dvo_device *dvo, int mode) +{ + uint8_t val; + + ch7017_read(dvo, CH7017_LVDS_POWER_DOWN, &val); + + /* Turn off TV/VGA, and never turn it on since we don't support it. 
*/ + ch7017_write(dvo, CH7017_POWER_MANAGEMENT, + CH7017_DAC0_POWER_DOWN | + CH7017_DAC1_POWER_DOWN | + CH7017_DAC2_POWER_DOWN | + CH7017_DAC3_POWER_DOWN | + CH7017_TV_POWER_DOWN_EN); + + if (mode == DRM_MODE_DPMS_ON) { + /* Turn on the LVDS */ + ch7017_write(dvo, CH7017_LVDS_POWER_DOWN, + val & ~CH7017_LVDS_POWER_DOWN_EN); + } else { + /* Turn off the LVDS */ + ch7017_write(dvo, CH7017_LVDS_POWER_DOWN, + val | CH7017_LVDS_POWER_DOWN_EN); + } + + /* XXX: Should actually wait for update power status somehow */ + udelay(20000); +} + +static void ch7017_dump_regs(struct intel_dvo_device *dvo) +{ + uint8_t val; + +#define DUMP(reg) \ +do { \ + ch7017_read(dvo, reg, &val); \ + DRM_DEBUG(#reg ": %02x\n", val); \ +} while (0) + + DUMP(CH7017_HORIZONTAL_ACTIVE_PIXEL_INPUT); + DUMP(CH7017_HORIZONTAL_ACTIVE_PIXEL_OUTPUT); + DUMP(CH7017_VERTICAL_ACTIVE_LINE_OUTPUT); + DUMP(CH7017_ACTIVE_INPUT_LINE_OUTPUT); + DUMP(CH7017_LVDS_PLL_VCO_CONTROL); + DUMP(CH7017_LVDS_PLL_FEEDBACK_DIV); + DUMP(CH7017_LVDS_CONTROL_2); + DUMP(CH7017_OUTPUTS_ENABLE); + DUMP(CH7017_LVDS_POWER_DOWN); +} + +static void ch7017_save(struct intel_dvo_device *dvo) +{ + struct ch7017_priv *priv = dvo->dev_priv; + + ch7017_read(dvo, CH7017_HORIZONTAL_ACTIVE_PIXEL_INPUT, &priv->save_hapi); + ch7017_read(dvo, CH7017_VERTICAL_ACTIVE_LINE_OUTPUT, &priv->save_valo); + ch7017_read(dvo, CH7017_ACTIVE_INPUT_LINE_OUTPUT, &priv->save_ailo); + ch7017_read(dvo, CH7017_LVDS_PLL_VCO_CONTROL, &priv->save_lvds_pll_vco); + ch7017_read(dvo, CH7017_LVDS_PLL_FEEDBACK_DIV, &priv->save_feedback_div); + ch7017_read(dvo, CH7017_LVDS_CONTROL_2, &priv->save_lvds_control_2); + ch7017_read(dvo, CH7017_OUTPUTS_ENABLE, &priv->save_outputs_enable); + ch7017_read(dvo, CH7017_LVDS_POWER_DOWN, &priv->save_lvds_power_down); + ch7017_read(dvo, CH7017_POWER_MANAGEMENT, &priv->save_power_management); +} + +static void ch7017_restore(struct intel_dvo_device *dvo) +{ + struct ch7017_priv *priv = dvo->dev_priv; + + /* Power down before changing mode 
*/ + ch7017_dpms(dvo, DRM_MODE_DPMS_OFF); + + ch7017_write(dvo, CH7017_HORIZONTAL_ACTIVE_PIXEL_INPUT, priv->save_hapi); + ch7017_write(dvo, CH7017_VERTICAL_ACTIVE_LINE_OUTPUT, priv->save_valo); + ch7017_write(dvo, CH7017_ACTIVE_INPUT_LINE_OUTPUT, priv->save_ailo); + ch7017_write(dvo, CH7017_LVDS_PLL_VCO_CONTROL, priv->save_lvds_pll_vco); + ch7017_write(dvo, CH7017_LVDS_PLL_FEEDBACK_DIV, priv->save_feedback_div); + ch7017_write(dvo, CH7017_LVDS_CONTROL_2, priv->save_lvds_control_2); + ch7017_write(dvo, CH7017_OUTPUTS_ENABLE, priv->save_outputs_enable); + ch7017_write(dvo, CH7017_LVDS_POWER_DOWN, priv->save_lvds_power_down); + ch7017_write(dvo, CH7017_POWER_MANAGEMENT, priv->save_power_management); +} + +static void ch7017_destroy(struct intel_dvo_device *dvo) +{ + struct ch7017_priv *priv = dvo->dev_priv; + + if (priv) { + kfree(priv); + dvo->dev_priv = NULL; + } +} + +struct intel_dvo_dev_ops ch7017_ops = { + .init = ch7017_init, + .detect = ch7017_detect, + .mode_valid = ch7017_mode_valid, + .mode_set = ch7017_mode_set, + .dpms = ch7017_dpms, + .dump_regs = ch7017_dump_regs, + .save = ch7017_save, + .restore = ch7017_restore, + .destroy = ch7017_destroy, +}; diff --git a/drivers/gpu/drm/i915/dvo_ch7xxx.c b/drivers/gpu/drm/i915/dvo_ch7xxx.c new file mode 100644 index 0000000..77c8639 --- /dev/null +++ b/drivers/gpu/drm/i915/dvo_ch7xxx.c @@ -0,0 +1,368 @@ +/************************************************************************** + +Copyright © 2006 Dave Airlie + +All Rights Reserved. 
+ +Permission is hereby granted, free of charge, to any person obtaining a +copy of this software and associated documentation files (the +"Software"), to deal in the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sub license, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +The above copyright notice and this permission notice (including the +next paragraph) shall be included in all copies or substantial portions +of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. +IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR +ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, +TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE +SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
+ +**************************************************************************/ + +#include "dvo.h" + +#define CH7xxx_REG_VID 0x4a +#define CH7xxx_REG_DID 0x4b + +#define CH7011_VID 0x83 /* 7010 as well */ +#define CH7009A_VID 0x84 +#define CH7009B_VID 0x85 +#define CH7301_VID 0x95 + +#define CH7xxx_VID 0x84 +#define CH7xxx_DID 0x17 + +#define CH7xxx_NUM_REGS 0x4c + +#define CH7xxx_CM 0x1c +#define CH7xxx_CM_XCM (1<<0) +#define CH7xxx_CM_MCP (1<<2) +#define CH7xxx_INPUT_CLOCK 0x1d +#define CH7xxx_GPIO 0x1e +#define CH7xxx_GPIO_HPIR (1<<3) +#define CH7xxx_IDF 0x1f + +#define CH7xxx_IDF_HSP (1<<3) +#define CH7xxx_IDF_VSP (1<<4) + +#define CH7xxx_CONNECTION_DETECT 0x20 +#define CH7xxx_CDET_DVI (1<<5) + +#define CH7301_DAC_CNTL 0x21 +#define CH7301_HOTPLUG 0x23 +#define CH7xxx_TCTL 0x31 +#define CH7xxx_TVCO 0x32 +#define CH7xxx_TPCP 0x33 +#define CH7xxx_TPD 0x34 +#define CH7xxx_TPVT 0x35 +#define CH7xxx_TLPF 0x36 +#define CH7xxx_TCT 0x37 +#define CH7301_TEST_PATTERN 0x48 + +#define CH7xxx_PM 0x49 +#define CH7xxx_PM_FPD (1<<0) +#define CH7301_PM_DACPD0 (1<<1) +#define CH7301_PM_DACPD1 (1<<2) +#define CH7301_PM_DACPD2 (1<<3) +#define CH7xxx_PM_DVIL (1<<6) +#define CH7xxx_PM_DVIP (1<<7) + +#define CH7301_SYNC_POLARITY 0x56 +#define CH7301_SYNC_RGB_YUV (1<<0) +#define CH7301_SYNC_POL_DVI (1<<5) + +/** @file + * driver for the Chrontel 7xxx DVI chip over DVO. 
+ */ + +static struct ch7xxx_id_struct { + uint8_t vid; + char *name; +} ch7xxx_ids[] = { + { CH7011_VID, "CH7011" }, + { CH7009A_VID, "CH7009A" }, + { CH7009B_VID, "CH7009B" }, + { CH7301_VID, "CH7301" }, +}; + +struct ch7xxx_reg_state { + uint8_t regs[CH7xxx_NUM_REGS]; +}; + +struct ch7xxx_priv { + bool quiet; + + struct ch7xxx_reg_state save_reg; + struct ch7xxx_reg_state mode_reg; + uint8_t save_TCTL, save_TPCP, save_TPD, save_TPVT; + uint8_t save_TLPF, save_TCT, save_PM, save_IDF; +}; + +static void ch7xxx_save(struct intel_dvo_device *dvo); + +static char *ch7xxx_get_id(uint8_t vid) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(ch7xxx_ids); i++) { + if (ch7xxx_ids[i].vid == vid) + return ch7xxx_ids[i].name; + } + + return NULL; +} + +/** Reads an 8 bit register */ +static bool ch7xxx_readb(struct intel_dvo_device *dvo, int addr, uint8_t *ch) +{ + struct ch7xxx_priv *ch7xxx= dvo->dev_priv; + struct intel_i2c_chan *i2cbus = dvo->i2c_bus; + u8 out_buf[2]; + u8 in_buf[2]; + + struct i2c_msg msgs[] = { + { + .addr = i2cbus->slave_addr, + .flags = 0, + .len = 1, + .buf = out_buf, + }, + { + .addr = i2cbus->slave_addr, + .flags = I2C_M_RD, + .len = 1, + .buf = in_buf, + } + }; + + out_buf[0] = addr; + out_buf[1] = 0; + + if (i2c_transfer(&i2cbus->adapter, msgs, 2) == 2) { + *ch = in_buf[0]; + return true; + }; + + if (!ch7xxx->quiet) { + DRM_DEBUG("Unable to read register 0x%02x from %s:%02x.\n", + addr, i2cbus->adapter.name, i2cbus->slave_addr); + } + return false; +} + +/** Writes an 8 bit register */ +static bool ch7xxx_writeb(struct intel_dvo_device *dvo, int addr, uint8_t ch) +{ + struct ch7xxx_priv *ch7xxx = dvo->dev_priv; + struct intel_i2c_chan *i2cbus = dvo->i2c_bus; + uint8_t out_buf[2]; + struct i2c_msg msg = { + .addr = i2cbus->slave_addr, + .flags = 0, + .len = 2, + .buf = out_buf, + }; + + out_buf[0] = addr; + out_buf[1] = ch; + + if (i2c_transfer(&i2cbus->adapter, &msg, 1) == 1) + return true; + + if (!ch7xxx->quiet) { + DRM_DEBUG("Unable to write 
register 0x%02x to %s:%d.\n", + addr, i2cbus->adapter.name, i2cbus->slave_addr); + } + + return false; +} + +static bool ch7xxx_init(struct intel_dvo_device *dvo, + struct intel_i2c_chan *i2cbus) +{ + /* this will detect the CH7xxx chip on the specified i2c bus */ + struct ch7xxx_priv *ch7xxx; + uint8_t vendor, device; + char *name; + + ch7xxx = kzalloc(sizeof(struct ch7xxx_priv), GFP_KERNEL); + if (ch7xxx == NULL) + return false; + + dvo->i2c_bus = i2cbus; + dvo->i2c_bus->slave_addr = dvo->slave_addr; + dvo->dev_priv = ch7xxx; + ch7xxx->quiet = true; + + if (!ch7xxx_readb(dvo, CH7xxx_REG_VID, &vendor)) + goto out; + + name = ch7xxx_get_id(vendor); + if (!name) { + DRM_DEBUG("ch7xxx not detected; got 0x%02x from %s slave %d.\n", + vendor, i2cbus->adapter.name, i2cbus->slave_addr); + goto out; + } + + + if (!ch7xxx_readb(dvo, CH7xxx_REG_DID, &device)) + goto out; + + if (device != CH7xxx_DID) { + DRM_DEBUG("ch7xxx not detected; got 0x%02x from %s slave %d.\n", + vendor, i2cbus->adapter.name, i2cbus->slave_addr); + goto out; + } + + ch7xxx->quiet = false; + DRM_DEBUG("Detected %s chipset, vendor/device ID 0x%02x/0x%02x\n", + name, vendor, device); + return true; +out: + kfree(ch7xxx); + return false; +} + +static enum drm_connector_status ch7xxx_detect(struct intel_dvo_device *dvo) +{ + uint8_t cdet, orig_pm, pm; + + ch7xxx_readb(dvo, CH7xxx_PM, &orig_pm); + + pm = orig_pm; + pm &= ~CH7xxx_PM_FPD; + pm |= CH7xxx_PM_DVIL | CH7xxx_PM_DVIP; + + ch7xxx_writeb(dvo, CH7xxx_PM, pm); + + ch7xxx_readb(dvo, CH7xxx_CONNECTION_DETECT, &cdet); + + ch7xxx_writeb(dvo, CH7xxx_PM, orig_pm); + + if (cdet & CH7xxx_CDET_DVI) + return connector_status_connected; + return connector_status_disconnected; +} + +static enum drm_mode_status ch7xxx_mode_valid(struct intel_dvo_device *dvo, + struct drm_display_mode *mode) +{ + if (mode->clock > 165000) + return MODE_CLOCK_HIGH; + + return MODE_OK; +} + +static void ch7xxx_mode_set(struct intel_dvo_device *dvo, + struct drm_display_mode *mode, + 
struct drm_display_mode *adjusted_mode)
+{
+	uint8_t tvco, tpcp, tpd, tlpf, idf;
+
+	if (mode->clock <= 65000) {
+		tvco = 0x23;
+		tpcp = 0x08;
+		tpd = 0x16;
+		tlpf = 0x60;
+	} else {
+		tvco = 0x2d;
+		tpcp = 0x06;
+		tpd = 0x26;
+		tlpf = 0xa0;
+	}
+
+	ch7xxx_writeb(dvo, CH7xxx_TCTL, 0x00);
+	ch7xxx_writeb(dvo, CH7xxx_TVCO, tvco);
+	ch7xxx_writeb(dvo, CH7xxx_TPCP, tpcp);
+	ch7xxx_writeb(dvo, CH7xxx_TPD, tpd);
+	ch7xxx_writeb(dvo, CH7xxx_TPVT, 0x30);
+	ch7xxx_writeb(dvo, CH7xxx_TLPF, tlpf);
+	ch7xxx_writeb(dvo, CH7xxx_TCT, 0x00);
+
+	ch7xxx_readb(dvo, CH7xxx_IDF, &idf);
+
+	idf &= ~(CH7xxx_IDF_HSP | CH7xxx_IDF_VSP);
+	if (mode->flags & DRM_MODE_FLAG_PHSYNC)
+		idf |= CH7xxx_IDF_HSP;
+
+	if (mode->flags & DRM_MODE_FLAG_PVSYNC)
+		idf |= CH7xxx_IDF_VSP;
+
+	ch7xxx_writeb(dvo, CH7xxx_IDF, idf);
+}
+
+/* set the CH7xxx power state */
+static void ch7xxx_dpms(struct intel_dvo_device *dvo, int mode)
+{
+	if (mode == DRM_MODE_DPMS_ON)
+		ch7xxx_writeb(dvo, CH7xxx_PM, CH7xxx_PM_DVIL | CH7xxx_PM_DVIP);
+	else
+		ch7xxx_writeb(dvo, CH7xxx_PM, CH7xxx_PM_FPD);
+}
+
+static void ch7xxx_dump_regs(struct intel_dvo_device *dvo)
+{
+	struct ch7xxx_priv *ch7xxx = dvo->dev_priv;
+	int i;
+
+	for (i = 0; i < CH7xxx_NUM_REGS; i++) {
+		if ((i % 8) == 0)
+			DRM_DEBUG("\n %02X: ", i);
+		DRM_DEBUG("%02X ", ch7xxx->mode_reg.regs[i]);
+	}
+}
+
+static void ch7xxx_save(struct intel_dvo_device *dvo)
+{
+	struct ch7xxx_priv *ch7xxx = dvo->dev_priv;
+
+	ch7xxx_readb(dvo, CH7xxx_TCTL, &ch7xxx->save_TCTL);
+	ch7xxx_readb(dvo, CH7xxx_TPCP, &ch7xxx->save_TPCP);
+	ch7xxx_readb(dvo, CH7xxx_TPD, &ch7xxx->save_TPD);
+	ch7xxx_readb(dvo, CH7xxx_TPVT, &ch7xxx->save_TPVT);
+	ch7xxx_readb(dvo, CH7xxx_TLPF, &ch7xxx->save_TLPF);
+	ch7xxx_readb(dvo, CH7xxx_PM, &ch7xxx->save_PM);
+	ch7xxx_readb(dvo, CH7xxx_IDF, &ch7xxx->save_IDF);
+}
+
+static void ch7xxx_restore(struct intel_dvo_device *dvo)
+{
+	struct ch7xxx_priv *ch7xxx = dvo->dev_priv;
+
+	ch7xxx_writeb(dvo, CH7xxx_TCTL, ch7xxx->save_TCTL);
+	ch7xxx_writeb(dvo,
CH7xxx_TPCP, ch7xxx->save_TPCP); + ch7xxx_writeb(dvo, CH7xxx_TPD, ch7xxx->save_TPD); + ch7xxx_writeb(dvo, CH7xxx_TPVT, ch7xxx->save_TPVT); + ch7xxx_writeb(dvo, CH7xxx_TLPF, ch7xxx->save_TLPF); + ch7xxx_writeb(dvo, CH7xxx_IDF, ch7xxx->save_IDF); + ch7xxx_writeb(dvo, CH7xxx_PM, ch7xxx->save_PM); +} + +static void ch7xxx_destroy(struct intel_dvo_device *dvo) +{ + struct ch7xxx_priv *ch7xxx = dvo->dev_priv; + + if (ch7xxx) { + kfree(ch7xxx); + dvo->dev_priv = NULL; + } +} + +struct intel_dvo_dev_ops ch7xxx_ops = { + .init = ch7xxx_init, + .detect = ch7xxx_detect, + .mode_valid = ch7xxx_mode_valid, + .mode_set = ch7xxx_mode_set, + .dpms = ch7xxx_dpms, + .dump_regs = ch7xxx_dump_regs, + .save = ch7xxx_save, + .restore = ch7xxx_restore, + .destroy = ch7xxx_destroy, +}; diff --git a/drivers/gpu/drm/i915/dvo_ivch.c b/drivers/gpu/drm/i915/dvo_ivch.c new file mode 100644 index 0000000..5907fda --- /dev/null +++ b/drivers/gpu/drm/i915/dvo_ivch.c @@ -0,0 +1,442 @@ +/* + * Copyright © 2006 Intel Corporation + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice (including the next + * paragraph) shall be included in all copies or substantial portions of the + * Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ *	Eric Anholt <er...@an...>
+ *
+ */
+
+#include "dvo.h"
+
+/*
+ * register definitions for the i82807aa.
+ *
+ * Documentation on this chipset can be found in datasheet #29069001 at
+ * intel.com.
+ */
+
+/*
+ * VCH Revision & GMBus Base Addr
+ */
+#define VR00		0x00
+# define VR00_BASE_ADDRESS_MASK	0x007f
+
+/*
+ * Functionality Enable
+ */
+#define VR01		0x01
+
+/*
+ * Enable the panel fitter
+ */
+# define VR01_PANEL_FIT_ENABLE	(1 << 3)
+/*
+ * Enables the LCD display.
+ *
+ * This must not be set while VR01_DVO_BYPASS_ENABLE is set.
+ */
+# define VR01_LCD_ENABLE	(1 << 2)
+/** Enables the DVO repeater. */
+# define VR01_DVO_BYPASS_ENABLE	(1 << 1)
+/** Enables the DVO clock */
+# define VR01_DVO_ENABLE	(1 << 0)
+
+/*
+ * LCD Interface Format
+ */
+#define VR10		0x10
+/** Enables LVDS output instead of CMOS */
+# define VR10_LVDS_ENABLE	(1 << 4)
+/** Enables 18-bit LVDS output. */
+# define VR10_INTERFACE_1X18	(0 << 2)
+/** Enables 24-bit LVDS or CMOS output */
+# define VR10_INTERFACE_1X24	(1 << 2)
+/** Enables 2x18-bit LVDS or CMOS output. */
+# define VR10_INTERFACE_2X18	(2 << 2)
+/** Enables 2x24-bit LVDS output */
+# define VR10_INTERFACE_2X24	(3 << 2)
+
+/*
+ * VR20 LCD Horizontal Display Size
+ */
+#define VR20		0x20
+
+/*
+ * LCD Vertical Display Size
+ */
+#define VR21		0x21
+
+/*
+ * Panel power down status
+ */
+#define VR30		0x30
+/** Read only bit indicating that the panel is not in a safe poweroff state.
*/ +# define VR30_PANEL_ON (1 << 15) + +#define VR40 0x40 +# define VR40_STALL_ENABLE (1 << 13) +# define VR40_VERTICAL_INTERP_ENABLE (1 << 12) +# define VR40_ENHANCED_PANEL_FITTING (1 << 11) +# define VR40_HORIZONTAL_INTERP_ENABLE (1 << 10) +# define VR40_AUTO_RATIO_ENABLE (1 << 9) +# define VR40_CLOCK_GATING_ENABLE (1 << 8) + +/* + * Panel Fitting Vertical Ratio + * (((image_height - 1) << 16) / ((panel_height - 1))) >> 2 + */ +#define VR41 0x41 + +/* + * Panel Fitting Horizontal Ratio + * (((image_width - 1) << 16) / ((panel_width - 1))) >> 2 + */ +#define VR42 0x42 + +/* + * Horizontal Image Size + */ +#define VR43 0x43 + +/* VR80 GPIO 0 + */ +#define VR80 0x80 +#define VR81 0x81 +#define VR82 0x82 +#define VR83 0x83 +#define VR84 0x84 +#define VR85 0x85 +#define VR86 0x86 +#define VR87 0x87 + +/* VR88 GPIO 8 + */ +#define VR88 0x88 + +/* Graphics BIOS scratch 0 + */ +#define VR8E 0x8E +# define VR8E_PANEL_TYPE_MASK (0xf << 0) +# define VR8E_PANEL_INTERFACE_CMOS (0 << 4) +# define VR8E_PANEL_INTERFACE_LVDS (1 << 4) +# define VR8E_FORCE_DEFAULT_PANEL (1 << 5) + +/* Graphics BIOS scratch 1 + */ +#define VR8F 0x8F +# define VR8F_VCH_PRESENT (1 << 0) +# define VR8F_DISPLAY_CONN (1 << 1) +# define VR8F_POWER_MASK (0x3c) +# define VR8F_POWER_POS (2) + + +struct ivch_priv { + bool quiet; + + uint16_t width, height; + + uint16_t save_VR01; + uint16_t save_VR40; +}; + + +static void ivch_dump_regs(struct intel_dvo_device *dvo); + +/** + * Reads a register on the ivch. + * + * Each of the 256 registers are 16 bits long. 
+ */ +static bool ivch_read(struct intel_dvo_device *dvo, int addr, uint16_t *data) +{ + struct ivch_priv *priv = dvo->dev_priv; + struct intel_i2c_chan *i2cbus = dvo->i2c_bus; + u8 out_buf[1]; + u8 in_buf[2]; + + struct i2c_msg msgs[] = { + { + .addr = i2cbus->slave_addr, + .flags = I2C_M_RD, + .len = 0, + }, + { + .addr = 0, + .flags = I2C_M_NOSTART, + .len = 1, + .buf = out_buf, + }, + { + .addr = i2cbus->slave_addr, + .flags = I2C_M_RD | I2C_M_NOSTART, + .len = 2, + .buf = in_buf, + } + }; + + out_buf[0] = addr; + + if (i2c_transfer(&i2cbus->adapter, msgs, 3) == 3) { + *data = (in_buf[1] << 8) | in_buf[0]; + return true; + }; + + if (!priv->quiet) { + DRM_DEBUG("Unable to read register 0x%02x from %s:%02x.\n", + addr, i2cbus->adapter.name, i2cbus->slave_addr); + } + return false; +} + +/** Writes a 16-bit register on the ivch */ +static bool ivch_write(struct intel_dvo_device *dvo, int addr, uint16_t data) +{ + struct ivch_priv *priv = dvo->dev_priv; + struct intel_i2c_chan *i2cbus = dvo->i2c_bus; + u8 out_buf[3]; + struct i2c_msg msg = { + .addr = i2cbus->slave_addr, + .flags = 0, + .len = 3, + .buf = out_buf, + }; + + out_buf[0] = addr; + out_buf[1] = data & 0xff; + out_buf[2] = data >> 8; + + if (i2c_transfer(&i2cbus->adapter, &msg, 1) == 1) + return true; + + if (!priv->quiet) { + DRM_DEBUG("Unable to write register 0x%02x to %s:%d.\n", + addr, i2cbus->adapter.name, i2cbus->slave_addr); + } + + return false; +} + +/** Probes the given bus and slave address for an ivch */ +static bool ivch_init(struct intel_dvo_device *dvo, + struct intel_i2c_chan *i2cbus) +{ + struct ivch_priv *priv; + uint16_t temp; + + priv = kzalloc(sizeof(struct ivch_priv), GFP_KERNEL); + if (priv == NULL) + return false; + + dvo->i2c_bus = i2cbus; + dvo->i2c_bus->slave_addr = dvo->slave_addr; + dvo->dev_priv = priv; + priv->quiet = true; + + if (!ivch_read(dvo, VR00, &temp)) + goto out; + priv->quiet = false; + + /* Since the identification bits are probably zeroes, which doesn't seem 
+ * very unique, check that the value in the base address field matches + * the address it's responding on. + */ + if ((temp & VR00_BASE_ADDRESS_MASK) != dvo->slave_addr) { + DRM_DEBUG("ivch detect failed due to address mismatch " + "(%d vs %d)\n", + (temp & VR00_BASE_ADDRESS_MASK), dvo->slave_addr); + goto out; + } + + ivch_read(dvo, VR20, &priv->width); + ivch_read(dvo, VR21, &priv->height); + + return true; + +out: + kfree(priv); + return false; +} + +static enum drm_connector_status ivch_detect(struct intel_dvo_device *dvo) +{ + return connector_status_connected; +} + +static enum drm_mode_status ivch_mode_valid(struct intel_dvo_device *dvo, + struct drm_display_mode *mode) +{ + if (mode->clock > 112000) + return MODE_CLOCK_HIGH; + + return MODE_OK; +} + +/** Sets the power state of the panel connected to the ivch */ +static void ivch_dpms(struct intel_dvo_device *dvo, int mode) +{ + int i; + uint16_t vr01, vr30, backlight; + + /* Set the new power state of the panel. */ + if (!ivch_read(dvo, VR01, &vr01)) + return; + + if (mode == DRM_MODE_DPMS_ON) + backlight = 1; + else + backlight = 0; + ivch_write(dvo, VR80, backlight); + + if (mode == DRM_MODE_DPMS_ON) + vr01 |= VR01_LCD_ENABLE | VR01_DVO_ENABLE; + else + vr01 &= ~(VR01_LCD_ENABLE | VR01_DVO_ENABLE); + + ivch_write(dvo, VR01, vr01); + + /* Wait for the panel to make its state transition */ + for (i = 0; i < 100; i++) { + if (!ivch_read(dvo, VR30, &vr30)) + break; + + if (((vr30 & VR30_PANEL_ON) != 0) == (mode == DRM_MODE_DPMS_ON)) + break; + udelay(1000); + } + /* wait some more; vch may fail to resync sometimes without this */ + udelay(16 * 1000); +} + +static void ivch_mode_set(struct intel_dvo_device *dvo, + struct drm_display_mode *mode, + struct drm_display_mode *adjusted_mode) +{ + uint16_t vr40 = 0; + uint16_t vr01; + + vr01 = 0; + vr40 = (VR40_STALL_ENABLE | VR40_VERTICAL_INTERP_ENABLE | + VR40_HORIZONTAL_INTERP_ENABLE); + + if (mode->hdisplay != adjusted_mode->hdisplay || + mode->vdisplay != 
adjusted_mode->vdisplay) { + uint16_t x_ratio, y_ratio; + + vr01 |= VR01_PANEL_FIT_ENABLE; + vr40 |= VR40_CLOCK_GATING_ENABLE; + x_ratio = (((mode->hdisplay - 1) << 16) / + (adjusted_mode->hdisplay - 1)) >> 2; + y_ratio = (((mode->vdisplay - 1) << 16) / + (adjusted_mode->vdisplay - 1)) >> 2; + ivch_write (dvo, VR42, x_ratio); + ivch_write (dvo, VR41, y_ratio); + } else { + vr01 &= ~VR01_PANEL_FIT_ENABLE; + vr40 &= ~VR40_CLOCK_GATING_ENABLE; + } + vr40 &= ~VR40_AUTO_RATIO_ENABLE; + + ivch_write(dvo, VR01, vr01); + ivch_write(dvo, VR40, vr40); + + ivch_dump_regs(dvo); +} + +static void ivch_dump_regs(struct intel_dvo_device *dvo) +{ + uint16_t val; + + ivch_read(dvo, VR00, &val); + DRM_DEBUG("VR00: 0x%04x\n", val); + ivch_read(dvo, VR01, &val); + DRM_DEBUG("VR01: 0x%04x\n", val); + ivch_read(dvo, VR30, &val); + DRM_DEBUG("VR30: 0x%04x\n", val); + ivch_read(dvo, VR40, &val); + DRM_DEBUG("VR40: 0x%04x\n", val); + + /* GPIO registers */ + ivch_read(dvo, VR80, &val); + DRM_DEBUG("VR80: 0x%04x\n", val); + ivch_read(dvo, VR81, &val); + DRM_DEBUG("VR81: 0x%04x\n", val); + ivch_read(dvo, VR82, &val); + DRM_DEBUG("VR82: 0x%04x\n", val); + ivch_read(dvo, VR83, &val); + DRM_DEBUG("VR83: 0x%04x\n", val); + ivch_read(dvo, VR84, &val); + DRM_DEBUG("VR84: 0x%04x\n", val); + ivch_read(dvo, VR85, &val); + DRM_DEBUG("VR85: 0x%04x\n", val); + ivch_read(dvo, VR86, &val); + DRM_DEBUG("VR86: 0x%04x\n", val); + ivch_read(dvo, VR87, &val); + DRM_DEBUG("VR87: 0x%04x\n", val); + ivch_read(dvo, VR88, &val); + DRM_DEBUG("VR88: 0x%04x\n", val); + + /* Scratch register 0 - AIM Panel type */ + ivch_read(dvo, VR8E, &val); + DRM_DEBUG("VR8E: 0x%04x\n", val); + + /* Scratch register 1 - Status register */ + ivch_read(dvo, VR8F, &val); + DRM_DEBUG("VR8F: 0x%04x\n", val); +} + +static void ivch_save(struct intel_dvo_device *dvo) +{ + struct ivch_priv *priv = dvo->dev_priv; + + ivch_read(dvo, VR01, &priv->save_VR01); + ivch_read(dvo, VR40, &priv->save_VR40); +} + +static void ivch_restore(struct 
intel_dvo_device *dvo) +{ + struct ivch_priv *priv = dvo->dev_priv; + + ivch_write(dvo, VR01, priv->save_VR01); + ivch_write(dvo, VR40, priv->save_VR40); +} + +static void ivch_destroy(struct intel_dvo_device *dvo) +{ + struct ivch_priv *priv = dvo->dev_priv; + + if (priv) { + kfree(priv); + dvo->dev_priv = NULL; + } +} + +struct intel_dvo_dev_ops ivch_ops= { + .init = ivch_init, + .dpms = ivch_dpms, + .save = ivch_save, + .restore = ivch_restore, + .mode_valid = ivch_mode_valid, + .mode_set = ivch_mode_set, + .detect = ivch_detect, + .dump_regs = ivch_dump_regs, + .destroy = ivch_destroy, +}; diff --git a/drivers/gpu/drm/i915/dvo_sil164.c b/drivers/gpu/drm/i915/dvo_sil164.c new file mode 100644 index 0000000..033a4bb --- /dev/null +++ b/drivers/gpu/drm/i915/dvo_sil164.c @@ -0,0 +1,302 @@ +/************************************************************************** + +Copyright © 2006 Dave Airlie + +All Rights Reserved. + +Permission is hereby granted, free of charge, to any person obtaining a +copy of this software and associated documentation files (the +"Software"), to deal in the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sub license, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +The above copyright notice and this permission notice (including the +next paragraph) shall be included in all copies or substantial portions +of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS +OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. +IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR +ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, +TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE +SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
+ +**************************************************************************/ + +#include "dvo.h" + +#define SIL164_VID 0x0001 +#define SIL164_DID 0x0006 + +#define SIL164_VID_LO 0x00 +#define SIL164_VID_HI 0x01 +#define SIL164_DID_LO 0x02 +#define SIL164_DID_HI 0x03 +#define SIL164_REV 0x04 +#define SIL164_RSVD 0x05 +#define SIL164_FREQ_LO 0x06 +#define SIL164_FREQ_HI 0x07 + +#define SIL164_REG8 0x08 +#define SIL164_8_VEN (1<<5) +#define SIL164_8_HEN (1<<4) +#define SIL164_8_DSEL (1<<3) +#define SIL164_8_BSEL (1<<2) +#define SIL164_8_EDGE (1<<1) +#define SIL164_8_PD (1<<0) + +#define SIL164_REG9 0x09 +#define SIL164_9_VLOW (1<<7) +#define SIL164_9_MSEL_MASK (0x7<<4) +#define SIL164_9_TSEL (1<<3) +#define SIL164_9_RSEN (1<<2) +#define SIL164_9_HTPLG (1<<1) +#define SIL164_9_MDI (1<<0) + +#define SIL164_REGC 0x0c + +struct sil164_save_rec { + uint8_t reg8; + uint8_t reg9; + uint8_t regc; +}; + +struct sil164_priv { + //I2CDevRec d; + bool quiet; + struct sil164_save_rec save_regs; + struct sil164_save_rec mode_regs; +}; + +#define SILPTR(d) ((SIL164Ptr)(d->DriverPrivate.ptr)) + +static bool sil164_readb(struct intel_dvo_device *dvo, int addr, uint8_t *ch) +{ + struct sil164_priv *sil = dvo->dev_priv; + struct intel_i2c_chan *i2cbus = dvo->i2c_bus; + u8 out_buf[2]; + u8 in_buf[2]; + + struct i2c_msg msgs[] = { + { + .addr = i2cbus->slave_addr, + .flags = 0, + .len = 1, + .buf = out_buf, + }, + { + .addr = i2cbus->slave_addr, + .flags = I2C_M_RD, + .len = 1, + .buf = in_buf, + } + }; + + out_buf[0] = addr; + out_buf[1] = 0; + + if (i2c_transfer(&i2cbus->adapter, msgs, 2) == 2) { + *ch = in_buf[0]; + return true; + }; + + if (!sil->quiet) { + DRM_DEBUG("Unable to read register 0x%02x from %s:%02x.\n", + addr, i2cbus->adapter.name, i2cbus->slave_addr); + } + return false; +} + +static bool sil164_writeb(struct intel_dvo_device *dvo, int addr, uint8_t ch) +{ + struct sil164_priv *sil= dvo->dev_priv; + struct intel_i2c_chan *i2cbus = dvo->i2c_bus; + uint8_t out_buf[2]; + 
struct i2c_msg msg = { + .addr = i2cbus->slave_addr, + .flags = 0, + .len = 2, + .buf = out_buf, + }; + + out_buf[0] = addr; + out_buf[1] = ch; + + if (i2c_transfer(&i2cbus->adapter, &msg, 1) == 1) + return true; + + if (!sil->quiet) { + DRM_DEBUG("Unable to write register 0x%02x to %s:%d.\n", + addr, i2cbus->adapter.name, i2cbus->slave_addr); + } + + return false; +} + +/* Silicon Image 164 driver for chip on i2c bus */ +static bool sil164_init(struct intel_dvo_device *dvo, + struct intel_i2c_chan *i2cbus) +{ + /* this will detect the SIL164 chip on the specified i2c bus */ + struct sil164_priv *sil; + unsigned char ch; + + sil = kzalloc(sizeof(struct sil164_priv), GFP_KERNEL); + if (sil == NULL) + return false; + + dvo->i2c_bus = i2cbus; + dvo->i2c_bus->slave_addr = dvo->slave_addr; + dvo->dev_priv = sil; + sil->quiet = true; + + if (!sil164_readb(dvo, SIL164_VID_LO, &ch)) + goto out; + + if (ch != (SIL164_VID & 0xff)) { + DRM_DEBUG("sil164 not detected got %d: from %s Slave %d.\n", + ch, i2cbus->adapter.name, i2cbus->slave_addr); + goto out; + } + + if (!sil164_readb(dvo, SIL164_DID_LO, &ch)) + goto out; + + if (ch != (SIL164_DID & 0xff)) { + DRM_DEBUG("sil164 not detected got %d: from %s Slave %d.\n", + ch, i2cbus->adapter.name, i2cbus->slave_addr); + goto out; + } + sil->quiet = false; + + DRM_DEBUG("init sil164 dvo controller successfully!\n"); + return true; + +out: + kfree(sil); + return false; +} + +static enum drm_connector_status sil164_detect(struct intel_dvo_device *dvo) +{ + uint8_t reg9; + + sil164_readb(dvo, SIL164_REG9, ®9); + + if (reg9 & SIL164_9_HTPLG) + return connector_status_connected; + else + return connector_status_disconnected; +} + +static enum drm_mode_status sil164_mode_valid(struct intel_dvo_device *dvo, + struct drm_display_mode *mode) +{ + return MODE_OK; +} + +static void sil164_mode_set(struct intel_dvo_device *dvo, + struct drm_display_mode *mode, + struct drm_display_mode *adjusted_mode) +{ + /* As long as the basics are set up, 
since we don't have clock + * dependencies in the mode setup, we can just leave the + * registers alone and everything will work fine. + */ + /* recommended programming sequence from doc */ + /*sil164_writeb(sil, 0x08, 0x30); + sil164_writeb(sil, 0x09, 0x00); + sil164_writeb(sil, 0x0a, 0x90); + sil164_writeb(sil, 0x0c, 0x89); + sil164_writeb(sil, 0x08, 0x31);*/ + /* don't do much */ + return; +} + +/* set the SIL164 power state */ +static void sil164_dpms(struct intel_dvo_device *dvo, int mode) +{ + int ret; + unsigned char ch; + + ret = sil164_readb(dvo, SIL164_REG8, &ch); + if (ret == false) + return; + + if (mode == DRM_MODE_DPMS_ON) + ch |= SIL164_8_PD; + else + ch &= ~SIL164_8_PD; + + sil164_writeb(dvo, SIL164_REG8, ch); + return; +} + +static void sil164_dump_regs(struct intel_dvo_device *dvo) +{ + uint8_t val; + + sil164_readb(dvo, SIL164_FREQ_LO, &val); + DRM_DEBUG("SIL164_FREQ_LO: 0x%02x\n", val); + sil164_readb(dvo, SIL164_FREQ_HI, &val); + DRM_DEBUG("SIL164_FREQ_HI: 0x%02x\n", val); + sil164_readb(dvo, SIL164_REG8, &val); + DRM_DEBUG("SIL164_REG8: 0x%02x\n", val); + sil164_readb(dvo, SIL164_REG9, &val); + DRM_DEBUG("SIL164_REG9: 0x%02x\n", val); + sil164_readb(dvo, SIL164_REGC, &val); + DRM_DEBUG("SIL164_REGC: 0x%02x\n", val); +} + +static void sil164_save(struct intel_dvo_device *dvo) +{ + struct sil164_priv *sil= dvo->dev_priv; + + if (!sil164_readb(dvo, SIL164_REG8, &sil->save_regs.reg8)) + return; + + if (!sil164_readb(dvo, SIL164_REG9, &sil->save_regs.reg9)) + return; + + if (!sil164_readb(dvo, SIL164_REGC, &sil->save_regs.regc)) + return; + + return; +} + +static void sil164_restore(struct intel_dvo_device *dvo) +{ + struct sil164_priv *sil = dvo->dev_priv; + + /* Restore it powered down initially */ + sil164_writeb(dvo, SIL164_REG8, sil->save_regs.reg8 & ~0x1); + + sil164_writeb(dvo, SIL164_REG9, sil->save_regs.reg9); + sil164_writeb(dvo, SIL164_REGC, sil->save_regs.regc); + sil164_writeb(dvo, SIL164_REG8, sil->save_regs.reg8); +} + +static void 
sil164_destroy(struct intel_dvo_device *dvo) +{ + struct sil164_priv *sil = dvo->dev_priv; + + if (sil) { + kfree(sil); + dvo->dev_priv = NULL; + } +} + +struct intel_dvo_dev_ops sil164_ops = { + .init = sil164_init, + .detect = sil164_detect, + .mode_valid = sil164_mode_valid, + .mode_set = sil164_mode_set, + .dpms = sil164_dpms, + .dump_regs = sil164_dump_regs, + .save = sil164_save, + .restore = sil164_restore, + .destroy = sil164_destroy, +}; diff --git a/drivers/gpu/drm/i915/dvo_tfp410.c b/drivers/gpu/drm/i915/dvo_tfp410.c new file mode 100644 index 0000000..207fda8 --- /dev/null +++ b/drivers/gpu/drm/i915/dvo_tfp410.c @@ -0,0 +1,335 @@ +/* + * Copyright © 2007 Dave Mueller + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice (including the next + * paragraph) shall be included in all copies or substantial portions of the + * Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS + * IN THE SOFTWARE. 
+ * + * Authors: + * Dave Mueller <dav...@gm...> + * + */ + +#include "dvo.h" + +/* register definitions according to the TFP410 data sheet */ +#define TFP410_VID 0x014C +#define TFP410_DID 0x0410 + +#define TFP410_VID_LO 0x00 +#define TFP410_VID_HI 0x01 +#define TFP410_DID_LO 0x02 +#define TFP410_DID_HI 0x03 +#define TFP410_REV 0x04 + +#define TFP410_CTL_1 0x08 +#define TFP410_CTL_1_TDIS (1<<6) +#define TFP410_CTL_1_VEN (1<<5) +#define TFP410_CTL_1_HEN (1<<4) +#define TFP410_CTL_1_DSEL (1<<3) +#define TFP410_CTL_1_BSEL (1<<2) +#define TFP410_CTL_1_EDGE (1<<1) +#define TFP410_CTL_1_PD (1<<0) + +#define TFP410_CTL_2 0x09 +#define TFP410_CTL_2_VLOW (1<<7) +#define TFP410_CTL_2_MSEL_MASK (0x7<<4) +#define TFP410_CTL_2_MSEL (1<<4) +#define TFP410_CTL_2_TSEL (1<<3) +#define TFP410_CTL_2_RSEN (1<<2) +#define TFP410_CTL_2_HTPLG (1<<1) +#define TFP410_CTL_2_MDI (1<<0) + +#define TFP410_CTL_3 0x0A +#define TFP410_CTL_3_DK_MASK (0x7<<5) +#define TFP410_CTL_3_DK (1<<5) +#define TFP410_CTL_3_DKEN (1<<4) +#define TFP410_CTL_3_CTL_MASK (0x7<<1) +#define TFP410_CTL_3_CTL (1<<1) + +#define TFP410_USERCFG 0x0B + +#define TFP410_DE_DLY 0x32 + +#define TFP410_DE_CTL 0x33 +#define TFP410_DE_CTL_DEGEN (1<<6) +#define TFP410_DE_CTL_VSPOL (1<<5) +#define TFP410_DE_CTL_HSPOL (1<<4) +#define TFP410_DE_CTL_DEDLY8 (1<<0) + +#define TFP410_DE_TOP 0x34 + +#define TFP410_DE_CNT_LO 0x36 +#define TFP410_DE_CNT_HI 0x37 + +#define TFP410_DE_LIN_LO 0x38 +#define TFP410_DE_LIN_HI 0x39 + +#define TFP410_H_RES_LO 0x3A +#define TFP410_H_RES_HI 0x3B + +#define TFP410_V_RES_LO 0x3C +#define TFP410_V_RES_HI 0x3D + +struct tfp410_save_rec { + uint8_t ctl1; + uint8_t ctl2; +}; + +struct tfp410_priv { + bool quiet; + + struct tfp410_save_rec saved_reg; + struct tfp410_save_rec mode_reg; +}; + +static bool tfp410_readb(struct intel_dvo_device *dvo, int addr, uint8_t *ch) +{ + struct tfp410_priv *tfp = dvo->dev_priv; + struct intel_i2c_chan *i2cbus = dvo->i2c_bus; + u8 out_buf[2]; + u8 in_buf[2]; + + struct 
i2c_msg msgs[] = { + { + .addr = i2cbus->slave_addr, + .flags = 0, + .len = 1, + .buf = out_buf, + }, + { + .addr = i2cbus->slave_addr, + .flags = I2C_M_RD, + .len = 1, + .buf = in_buf, + } + }; + + out_buf[0] = addr; + out_buf[1] = 0; + + if (i2c_transfer(&i2cbus->adapter, msgs, 2) == 2) { + *ch = in_buf[0]; + return true; + }; + + if (!tfp->quiet) { + DRM_DEBUG("Unable to read register 0x%02x from %s:%02x.\n", + addr, i2cbus->adapter.name, i2cbus->slave_addr); + } + return false; +} + +static bool tfp410_writeb(struct intel_dvo_device *dvo, int addr, uint8_t ch) +{ + struct tfp410_priv *tfp = dvo->dev_priv; + struct intel_i2c_chan *i2cbus = dvo->i2c_bus; + uint8_t out_buf[2]; + struct i2c_msg msg = { + .addr = i2cbus->slave_addr, + .flags = 0, + .len = 2, + .buf = out_buf, + }; + + out_buf[0] = addr; + out_buf[1] = ch; + + if (i2c_transfer(&i2cbus->adapter, &msg, 1) == 1) + return true; + + if (!tfp->quiet) { + DRM_DEBUG("Unable to write register 0x%02x to %s:%d.\n", + addr, i2cbus->adapter.name, i2cbus->slave_addr); + } + + return false; +} + +static int tfp410_getid(struct intel_dvo_device *dvo, int addr) +{ + uint8_t ch1, ch2; + + if (tfp410_readb(dvo, addr+0, &ch1) && + tfp410_readb(dvo, addr+1, &ch2)) + return ((ch2 << 8) & 0xFF00) | (ch1 & 0x00FF); + + return -1; +} + +/* Ti TFP410 driver for chip on i2c bus */ +static bool tfp410_init(struct intel_dvo_device *dvo, + struct intel_i2c_chan *i2cbus) +{ + /* this will detect the tfp410 chip on the specified i2c bus */ + struct tfp410_priv *tfp; + int id; + + tfp = kzalloc(sizeof(struct tfp410_priv), GFP_KERNEL); + if (tfp == NULL) + return false; + + dvo->i2c_bus = i2cbus; + dvo->i2c_bus->slave_addr = dvo->slave_addr; + dvo->dev_priv = tfp; + tfp->quiet = true; + + if ((id = tfp410_getid(dvo, TFP410_VID_LO)) != TFP410_VID) { + DRM_DEBUG("tfp410 not detected got VID %X: from %s Slave %d.\n", + id, i2cbus->adapter.name, i2cbus->slave_addr); + goto out; + } + + if ((id = tfp410_getid(dvo, TFP410_DID_LO)) != 
TFP410_DID) { + DRM_DEBUG("tfp410 not detected got DID %X: from %s Slave %d.\n", + id, i2cbus->adapter.name, i2cbus->slave_addr); + goto out; + } + tfp->quiet = false; + return true; +out: + kfree(tfp); + return false; +} + +static enum drm_connector_status tfp410_detect(struct intel_dvo_device *dvo) +{ + enum drm_connector_status ret = connector_status_disconnected; + uint8_t ctl2; + + if (tfp410_readb(dvo, TFP410_CTL_2, &ctl2)) { + if (ctl2 & TFP410_CTL_2_HTPLG) + ret = connector_status_connected; + else + ret = connector_status_disconnected; + } + + return ret; +} + +static enum drm_mode_status tfp410_mode_valid(struct intel_dvo_device *dvo, + struct drm_display_mode *mode) +{ + return MODE_OK; +} + +static void tfp410_mode_set(struct intel_dvo_device *dvo, + struct drm_display_mode *mode, + struct drm_display_mode *adjusted_mode) +{ + /* As long as the basics are set up, since we don't have clock dependencies + * in the mode setup, we can just leave the registers alone and everything + * will work fine. 
+ */ + /* don't do much */ + return; +} + +/* set the tfp410 power state */ +static void tfp410_dpms(struct intel_dvo_device *dvo, int mode) +{ + uint8_t ctl1; + + if (!tfp410_readb(dvo, TFP410_CTL_1, &ctl1)) + return; + + if (mode == DRM_MODE_DPMS_ON) + ctl1 |= TFP410_CTL_1_PD; + else + ctl1 &= ~TFP410_CTL_1_PD; + + tfp410_writeb(dvo, TFP410_CTL_1, ctl1); +} + +static void tfp410_dump_regs(struct intel_dvo_device *dvo) +{ + uint8_t val, val2; + + tfp410_readb(dvo, TFP410_REV, &val); + DRM_DEBUG("TFP410_REV: 0x%02X\n", val); + tfp410_readb(dvo, TFP410_CTL_1, &val); + DRM_DEBUG("TFP410_CTL1: 0x%02X\n", val); + tfp410_readb(dvo, TFP410_CTL_2, &val); + DRM_DEBUG("TFP410_CTL2: 0x%02X\n", val); + tfp410_readb(dvo, TFP410_CTL_3, &val); + DRM_DEBUG("TFP410_CTL3: 0x%02X\n", val); + tfp410_readb(dvo, TFP410_USERCFG, &val); + DRM_DEBUG("TFP410_USERCFG: 0x%02X\n", val); + tfp410_readb(dvo, TFP410_DE_DLY, &val); + DRM_DEBUG("TFP410_DE_DLY: 0x%02X\n", val); + tfp410_readb(dvo, TFP410_DE_CTL, &val); + DRM_DEBUG("TFP410_DE_CTL: 0x%02X\n", val); + tfp410_readb(dvo, TFP410_DE_TOP, &val); + DRM_DEBUG("TFP410_DE_TOP: 0x%02X\n", val); + tfp410_readb(dvo, TFP410_DE_CNT_LO, &val); + tfp410_readb(dvo, TFP410_DE_CNT_HI, &val2); + DRM_DEBUG("TFP410_DE_CNT: 0x%02X%02X\n", val2, val); + tfp410_readb(dvo, TFP410_DE_LIN_LO, &val); + tfp410_readb(dvo, TFP410_DE_LIN_HI, &val2); + DRM_DEBUG("TFP410_DE_LIN: 0x%02X%02X\n", val2, val); + tfp410_readb(dvo, TFP410_H_RES_LO, &val); + tfp410_readb(dvo, TFP410_H_RES_HI, &val2); + DRM_DEBUG("TFP410_H_RES: 0x%02X%02X\n", val2, val); + tfp410_readb(dvo, TFP410_V_RES_LO, &val); + tfp410_readb(dvo, TFP410_V_RES_HI, &val2); + DRM_DEBUG("TFP410_V_RES: 0x%02X%02X\n", val2, val); +} + +static void tfp410_save(struct intel_dvo_device *dvo) +{ + struct tfp410_priv *tfp = dvo->dev_priv; + + if (!tfp410_readb(dvo, TFP410_CTL_1, &tfp->saved_reg.ctl1)) + return; + + if (!tfp410_readb(dvo, TFP410_CTL_2, &tfp->saved_reg.ctl2)) + return; +} + +static void 
tfp410_restore(struct intel_dvo_device *dvo) +{ + struct tfp410_priv *tfp = dvo->dev_priv; + + /* Restore it powered down initially */ + tfp410_writeb(dvo, TFP410_CTL_1, tfp->saved_reg.ctl1 & ~0x1); + + tfp410_writeb(dvo, TFP410_CTL_2, tfp->saved_reg.ctl2); + tfp410_writeb(dvo, TFP410_CTL_1, tfp->saved_reg.ctl1); +} + +static void tfp410_destroy(struct intel_dvo_device *dvo) +{ + struct tfp410_priv *tfp = dvo->dev_priv; + + if (tfp) { + kfree(tfp); + dvo->dev_priv = NULL; + } +} + +struct intel_dvo_dev_ops tfp410_ops = { + .init = tfp410_init, + .detect = tfp410_detect, + .mode_valid = tfp410_mode_valid, + .mode_set = tfp410_mode_set, + .dpms = tfp410_dpms, + .dump_regs = tfp410_dump_regs, + .save = tfp410_save, + .restore = tfp410_restore, + .destroy = tfp410_destroy, +}; diff --git a/drivers/gpu/drm/i915/i915_dma.c b/drivers/gpu/drm/i915/i915_dma.c index ab1b6f9..b52ef43 100644 --- a/drivers/gpu/drm/i915/i915_dma.c +++ b/drivers/gpu/drm/i915/i915_dma.c @@ -39,6 +39,7 @@ int i915_wait_ring(struct drm_device * dev, int n, const char *caller) { drm_i915_private_t *dev_priv = dev->dev_private; + struct drm_i915_master_private *master_priv = dev->primary->master->driver_priv; drm_i915_ring_buffer_t *ring = &(dev_priv->ring); u32 acthd_reg = IS_I965G(dev) ? ACTHD_I965 : ACTHD; u32 last_acthd = I915_READ(acthd_reg); @@ -55,8 +56,8 @@ int i915_wait_ring(struct drm_device * dev, int n, const char *caller) if (ring->space >= n) return 0; - if (dev_priv->sarea_priv) - dev_pr... [truncated message content] |
From: Maarten M. <mad...@gm...> - 2008-10-30 23:33:57
|
On Thu, Oct 30, 2008 at 10:08 PM, Jesse Barnes <jb...@vi...> wrote: > This commit adds the core mode setting routines for use by DRM drivers to > manage outputs and displays. Originally based on the X.Org Randr 1.2 > implementation, the code has since been heavily changed by Dave Airlie > with contributions by Jesse Barnes, Jakob Bornecrantz and others. > > This one should probably be split up a bit; I think the TTM stuff in > particular could be factored out fairly easily. > > diff --git a/arch/x86/mm/pat.c b/arch/x86/mm/pat.c > index 738fd0f..31ce044 100644 > --- a/arch/x86/mm/pat.c > +++ b/arch/x86/mm/pat.c > @@ -11,6 +11,7 @@ > #include <linux/bootmem.h> > #include <linux/debugfs.h> > #include <linux/kernel.h> > +#include <linux/module.h> > #include <linux/gfp.h> > #include <linux/mm.h> > #include <linux/fs.h> > @@ -29,6 +30,7 @@ > > #ifdef CONFIG_X86_PAT > int __read_mostly pat_enabled = 1; > +EXPORT_SYMBOL_GPL(pat_enabled); > > void __cpuinit pat_disable(char *reason) > { > diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig > index a8b33c2..6723182 100644 > --- a/drivers/gpu/drm/Kconfig > +++ b/drivers/gpu/drm/Kconfig > @@ -41,6 +41,14 @@ config DRM_RADEON > > If M is selected, the module will be called radeon. > > +config DRM_RADEON_KMS > + bool "Enable modesetting on radeon by default" > + depends on DRM_RADEON > + help > + Choose this option if you want kernel modesetting enabled by default, > + and you have a new enough userspace to support this. Running old > + userspaces with this enabled will cause pain. > + > config DRM_I810 > tristate "Intel I810" > depends on DRM && AGP && AGP_INTEL > @@ -76,6 +84,15 @@ config DRM_I915 > > endchoice > > +config DRM_I915_KMS > + bool "Enable modesetting on intel by default" > + depends on DRM_I915 > + help > + Choose this option if you want kernel modesetting enabled by default, > + and you have a new enough userspace to support this. Running old > + userspaces with this enabled will cause pain. 
> + > + > config DRM_MGA > tristate "Matrox g200/g400" > depends on DRM > diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile > index 74da994..48567a9 100644 > --- a/drivers/gpu/drm/Makefile > +++ b/drivers/gpu/drm/Makefile > @@ -9,7 +9,9 @@ drm-y := drm_auth.o drm_bufs.o drm_cache.o \ > drm_drv.o drm_fops.o drm_gem.o drm_ioctl.o drm_irq.o \ > drm_lock.o drm_memory.o drm_proc.o drm_stub.o drm_vm.o \ > drm_agpsupport.o drm_scatter.o ati_pcigart.o drm_pci.o \ > - drm_sysfs.o drm_hashtab.o drm_sman.o drm_mm.o > + drm_sysfs.o drm_hashtab.o drm_sman.o drm_mm.o \ > + drm_fence.o drm_bo.o drm_ttm.o drm_bo_move.o \ > + drm_crtc.o drm_crtc_helper.o drm_modes.o drm_edid.o > > drm-$(CONFIG_COMPAT) += drm_ioc32.o > > diff --git a/drivers/gpu/drm/ati_pcigart.c b/drivers/gpu/drm/ati_pcigart.c > index c533d0c..adc57dd 100644 > --- a/drivers/gpu/drm/ati_pcigart.c > +++ b/drivers/gpu/drm/ati_pcigart.c > @@ -34,9 +34,55 @@ > #include "drmP.h" > > # define ATI_PCIGART_PAGE_SIZE 4096 /**< PCI GART page size */ > +# define ATI_PCIGART_PAGE_MASK (~(ATI_PCIGART_PAGE_SIZE-1)) > > -static int drm_ati_alloc_pcigart_table(struct drm_device *dev, > - struct drm_ati_pcigart_info *gart_info) > +#define ATI_PCIE_WRITE 0x4 > +#define ATI_PCIE_READ 0x8 > + > +static __inline__ void gart_insert_page_into_table(struct drm_ati_pcigart_info *gart_info, dma_addr_t > addr, volatile u32 *pci_gart) > +{ > + u32 page_base; > + > + page_base = (u32)addr & ATI_PCIGART_PAGE_MASK; > + switch(gart_info->gart_reg_if) { > + case DRM_ATI_GART_IGP: > + page_base |= (upper_32_bits(addr) & 0xff) << 4; > + page_base |= 0xc; > + break; > + case DRM_ATI_GART_PCIE: > + page_base >>= 8; > + page_base |= (upper_32_bits(addr) & 0xff) << 24; > + page_base |= ATI_PCIE_READ | ATI_PCIE_WRITE; > + break; > + default: > + case DRM_ATI_GART_PCI: > + break; > + } > + *pci_gart = cpu_to_le32(page_base); > +} > + > +static __inline__ dma_addr_t gart_get_page_from_table(struct drm_ati_pcigart_info *gart_info, > volatile 
u32 *pci_gart) > +{ > + dma_addr_t retval; > + switch(gart_info->gart_reg_if) { > + case DRM_ATI_GART_IGP: > + retval = (*pci_gart & ATI_PCIGART_PAGE_MASK); > + retval += (((*pci_gart & 0xf0) >> 4) << 16) << 16; > + break; > + case DRM_ATI_GART_PCIE: > + retval = (*pci_gart & ~0xc); > + retval <<= 8; > + break; > + case DRM_ATI_GART_PCI: > + retval = *pci_gart; > + break; > + } > + > + return retval; > +} > + > +int drm_ati_alloc_pcigart_table(struct drm_device *dev, > + struct drm_ati_pcigart_info *gart_info) > { > gart_info->table_handle = drm_pci_alloc(dev, gart_info->table_size, > PAGE_SIZE, > @@ -44,12 +90,25 @@ static int drm_ati_alloc_pcigart_table(struct drm_device *dev, > if (gart_info->table_handle == NULL) > return -ENOMEM; > > +#ifdef CONFIG_X86 > + /* IGPs only exist on x86 in any case */ > + if (gart_info->gart_reg_if == DRM_ATI_GART_IGP) > + set_memory_uc((unsigned long)gart_info->table_handle->vaddr, gart_info->table_size >> > PAGE_SHIFT); > +#endif > + > + memset(gart_info->table_handle->vaddr, 0, gart_info->table_size); > return 0; > } > +EXPORT_SYMBOL(drm_ati_alloc_pcigart_table); > > static void drm_ati_free_pcigart_table(struct drm_device *dev, > struct drm_ati_pcigart_info *gart_info) > { > +#ifdef CONFIG_X86 > + /* IGPs only exist on x86 in any case */ > + if (gart_info->gart_reg_if == DRM_ATI_GART_IGP) > + set_memory_wb((unsigned long)gart_info->table_handle->vaddr, gart_info->table_size >> > PAGE_SHIFT); > +#endif > drm_pci_free(dev, gart_info->table_handle); > gart_info->table_handle = NULL; > } > @@ -63,7 +122,6 @@ int drm_ati_pcigart_cleanup(struct drm_device *dev, struct drm_ati_pcigart_info > > /* we need to support large memory configurations */ > if (!entry) { > - DRM_ERROR("no scatter/gather memory!\n"); > return 0; > } > > @@ -98,17 +156,14 @@ int drm_ati_pcigart_init(struct drm_device *dev, struct drm_ati_pcigart_info *ga > struct drm_sg_mem *entry = dev->sg; > void *address = NULL; > unsigned long pages; > - u32 *pci_gart, 
page_base; > + u32 *pci_gart; > dma_addr_t bus_address = 0; > int i, j, ret = 0; > int max_pages; > + dma_addr_t entry_addr; > > - if (!entry) { > - DRM_ERROR("no scatter/gather memory!\n"); > - goto done; > - } > > - if (gart_info->gart_table_location == DRM_ATI_GART_MAIN) { > + if (gart_info->gart_table_location == DRM_ATI_GART_MAIN && gart_info->table_handle == NULL) { > DRM_DEBUG("PCI: no table in VRAM: using normal RAM\n"); > > ret = drm_ati_alloc_pcigart_table(dev, gart_info); > @@ -116,15 +171,19 @@ int drm_ati_pcigart_init(struct drm_device *dev, struct drm_ati_pcigart_info *ga > DRM_ERROR("cannot allocate PCI GART page!\n"); > goto done; > } > + } > > + if (gart_info->gart_table_location == DRM_ATI_GART_MAIN) { > address = gart_info->table_handle->vaddr; > bus_address = gart_info->table_handle->busaddr; > } else { > address = gart_info->addr; > bus_address = gart_info->bus_addr; > - DRM_DEBUG("PCI: Gart Table: VRAM %08LX mapped at %08lX\n", > - (unsigned long long)bus_address, > - (unsigned long)address); > + } > + > + if (!entry) { > + DRM_ERROR("no scatter/gather memory!\n"); > + goto done; > } > > pci_gart = (u32 *) address; > @@ -133,8 +192,6 @@ int drm_ati_pcigart_init(struct drm_device *dev, struct drm_ati_pcigart_info *ga > pages = (entry->pages <= max_pages) > ? 
entry->pages : max_pages; > > - memset(pci_gart, 0, max_pages * sizeof(u32)); > - > for (i = 0; i < pages; i++) { > /* we need to support large memory configurations */ > entry->busaddr[i] = pci_map_page(dev->pdev, entry->pagelist[i], > @@ -146,32 +203,18 @@ int drm_ati_pcigart_init(struct drm_device *dev, struct drm_ati_pcigart_info *ga > bus_address = 0; > goto done; > } > - page_base = (u32) entry->busaddr[i]; > > + entry_addr = entry->busaddr[i]; > for (j = 0; j < (PAGE_SIZE / ATI_PCIGART_PAGE_SIZE); j++) { > - switch(gart_info->gart_reg_if) { > - case DRM_ATI_GART_IGP: > - *pci_gart = cpu_to_le32((page_base) | 0xc); > - break; > - case DRM_ATI_GART_PCIE: > - *pci_gart = cpu_to_le32((page_base >> 8) | 0xc); > - break; > - default: > - case DRM_ATI_GART_PCI: > - *pci_gart = cpu_to_le32(page_base); > - break; > - } > + gart_insert_page_into_table(gart_info, entry_addr, pci_gart); > pci_gart++; > - page_base += ATI_PCIGART_PAGE_SIZE; > + entry_addr += ATI_PCIGART_PAGE_SIZE; > } > } > + > ret = 1; > > -#if defined(__i386__) || defined(__x86_64__) > - wbinvd(); > -#else > mb(); > -#endif > > done: > gart_info->addr = address; > @@ -179,3 +222,142 @@ int drm_ati_pcigart_init(struct drm_device *dev, struct drm_ati_pcigart_info *ga > return ret; > } > EXPORT_SYMBOL(drm_ati_pcigart_init); > + > +static int ati_pcigart_needs_unbind_cache_adjust(struct drm_ttm_backend *backend) > +{ > + return ((backend->flags & DRM_BE_FLAG_BOUND_CACHED) ? 
0 : 1); > +} > + > +static int ati_pcigart_populate(struct drm_ttm_backend *backend, > + unsigned long num_pages, > + struct page **pages, > + struct page *dummy_read_page) > +{ > + struct ati_pcigart_ttm_backend *atipci_be = > + container_of(backend, struct ati_pcigart_ttm_backend, backend); > + > + atipci_be->pages = pages; > + atipci_be->num_pages = num_pages; > + atipci_be->populated = 1; > + return 0; > +} > + > +static int ati_pcigart_bind_ttm(struct drm_ttm_backend *backend, > + struct drm_bo_mem_reg *bo_mem) > +{ > + struct ati_pcigart_ttm_backend *atipci_be = > + container_of(backend, struct ati_pcigart_ttm_backend, backend); > + off_t j; > + int i; > + struct drm_ati_pcigart_info *info = atipci_be->gart_info; > + volatile u32 *pci_gart; > + dma_addr_t offset = bo_mem->mm_node->start; > + dma_addr_t page_base; > + > + pci_gart = info->addr; > + > + j = offset; > + while (j < (offset + atipci_be->num_pages)) { > + if (gart_get_page_from_table(info, pci_gart + j)) > + return -EBUSY; > + j++; > + } > + > + for (i = 0, j = offset; i < atipci_be->num_pages; i++, j++) { > + struct page *cur_page = atipci_be->pages[i]; > + /* write value */ > + page_base = page_to_phys(cur_page); > + gart_insert_page_into_table(info, page_base, pci_gart + j); > + } > + > + mb(); > + atipci_be->gart_flush_fn(atipci_be->dev); > + > + atipci_be->bound = 1; > + atipci_be->offset = offset; > + /* need to traverse table and add entries */ > + DRM_DEBUG("\n"); > + return 0; > +} > + > +static int ati_pcigart_unbind_ttm(struct drm_ttm_backend *backend) > +{ > + struct ati_pcigart_ttm_backend *atipci_be = > + container_of(backend, struct ati_pcigart_ttm_backend, backend); > + struct drm_ati_pcigart_info *info = atipci_be->gart_info; > + unsigned long offset = atipci_be->offset; > + int i; > + off_t j; > + volatile u32 *pci_gart = info->addr; > + > + if (atipci_be->bound != 1) > + return -EINVAL; > + > + for (i = 0, j = offset; i < atipci_be->num_pages; i++, j++) { > + *(pci_gart + j) = 0; 
> + } > + > + mb(); > + atipci_be->gart_flush_fn(atipci_be->dev); > + atipci_be->bound = 0; > + atipci_be->offset = 0; > + return 0; > +} > + > +static void ati_pcigart_clear_ttm(struct drm_ttm_backend *backend) > +{ > + struct ati_pcigart_ttm_backend *atipci_be = > + container_of(backend, struct ati_pcigart_ttm_backend, backend); > + > + DRM_DEBUG("\n"); > + if (atipci_be->pages) { > + backend->func->unbind(backend); > + atipci_be->pages = NULL; > + > + } > + atipci_be->num_pages = 0; > +} > + > +static void ati_pcigart_destroy_ttm(struct drm_ttm_backend *backend) > +{ > + struct ati_pcigart_ttm_backend *atipci_be; > + if (backend) { > + DRM_DEBUG("\n"); > + atipci_be = container_of(backend, struct ati_pcigart_ttm_backend, backend); > + if (atipci_be) { > + if (atipci_be->pages) { > + backend->func->clear(backend); > + } > + drm_ctl_free(atipci_be, sizeof(*atipci_be), DRM_MEM_TTM); > + } > + } > +} > + > +static struct drm_ttm_backend_func ati_pcigart_ttm_backend = > +{ > + .needs_ub_cache_adjust = ati_pcigart_needs_unbind_cache_adjust, > + .populate = ati_pcigart_populate, > + .clear = ati_pcigart_clear_ttm, > + .bind = ati_pcigart_bind_ttm, > + .unbind = ati_pcigart_unbind_ttm, > + .destroy = ati_pcigart_destroy_ttm, > +}; > + > +struct drm_ttm_backend *ati_pcigart_init_ttm(struct drm_device *dev, struct drm_ati_pcigart_info > *info, void (*gart_flush_fn)(struct drm_device *dev)) > +{ > + struct ati_pcigart_ttm_backend *atipci_be; > + > + atipci_be = drm_ctl_calloc(1, sizeof (*atipci_be), DRM_MEM_TTM); > + if (!atipci_be) > + return NULL; > + > + atipci_be->populated = 0; > + atipci_be->backend.func = &ati_pcigart_ttm_backend; > +// atipci_be->backend.mem_type = DRM_BO_MEM_TT; > + atipci_be->gart_info = info; > + atipci_be->gart_flush_fn = gart_flush_fn; > + atipci_be->dev = dev; > + > + return &atipci_be->backend; > +} > +EXPORT_SYMBOL(ati_pcigart_init_ttm); > diff --git a/drivers/gpu/drm/drm_agpsupport.c b/drivers/gpu/drm/drm_agpsupport.c > index 
3d33b82..e048aa2 100644 > --- a/drivers/gpu/drm/drm_agpsupport.c > +++ b/drivers/gpu/drm/drm_agpsupport.c > @@ -496,6 +496,177 @@ drm_agp_bind_pages(struct drm_device *dev, > } > EXPORT_SYMBOL(drm_agp_bind_pages); > > +/* > + * AGP ttm backend interface. > + */ > + > +#ifndef AGP_USER_TYPES > +#define AGP_USER_TYPES (1 << 16) > +#define AGP_USER_MEMORY (AGP_USER_TYPES) > +#define AGP_USER_CACHED_MEMORY (AGP_USER_TYPES + 1) > +#endif > +#define AGP_REQUIRED_MAJOR 0 > +#define AGP_REQUIRED_MINOR 102 > + > +static int drm_agp_needs_unbind_cache_adjust(struct drm_ttm_backend *backend) > +{ > + return ((backend->flags & DRM_BE_FLAG_BOUND_CACHED) ? 0 : 1); > +} > + > + > +static int drm_agp_populate(struct drm_ttm_backend *backend, > + unsigned long num_pages, struct page **pages, > + struct page *dummy_read_page) > +{ > + struct drm_agp_ttm_backend *agp_be = > + container_of(backend, struct drm_agp_ttm_backend, backend); > + struct page **cur_page, **last_page = pages + num_pages; > + DRM_AGP_MEM *mem; > + int dummy_page_count = 0; > + > + if (drm_alloc_memctl(num_pages * sizeof(void *))) > + return -1; > + > + DRM_DEBUG("drm_agp_populate_ttm\n"); > + mem = drm_agp_allocate_memory(agp_be->bridge, num_pages, AGP_USER_MEMORY); > + if (!mem) { > + drm_free_memctl(num_pages * sizeof(void *)); > + return -1; > + } > + > + DRM_DEBUG("Current page count is %ld\n", (long) mem->page_count); > + mem->page_count = 0; > + for (cur_page = pages; cur_page < last_page; ++cur_page) { > + struct page *page = *cur_page; > + if (!page) { > + page = dummy_read_page; > + ++dummy_page_count; > + } > + mem->memory[mem->page_count++] = phys_to_gart(page_to_phys(page)); > + } > + if (dummy_page_count) > + DRM_DEBUG("Mapped %d dummy pages\n", dummy_page_count); > + agp_be->mem = mem; > + return 0; > +} > + > +static int drm_agp_bind_ttm(struct drm_ttm_backend *backend, > + struct drm_bo_mem_reg *bo_mem) > +{ > + struct drm_agp_ttm_backend *agp_be = > + container_of(backend, struct 
drm_agp_ttm_backend, backend); > + DRM_AGP_MEM *mem = agp_be->mem; > + int ret; > + int snooped = (bo_mem->flags & DRM_BO_FLAG_CACHED) && !(bo_mem->flags & > DRM_BO_FLAG_CACHED_MAPPED); > + > + DRM_DEBUG("drm_agp_bind_ttm\n"); > + mem->is_flushed = true; > + mem->type = AGP_USER_MEMORY; > + /* CACHED MAPPED implies not snooped memory */ > + if (snooped) > + mem->type = AGP_USER_CACHED_MEMORY; > + > + ret = drm_agp_bind_memory(mem, bo_mem->mm_node->start); > + if (ret) > + DRM_ERROR("AGP Bind memory failed\n"); > + > + DRM_FLAG_MASKED(backend->flags, (bo_mem->flags & DRM_BO_FLAG_CACHED) ? > + DRM_BE_FLAG_BOUND_CACHED : 0, > + DRM_BE_FLAG_BOUND_CACHED); > + return ret; > +} > + > +static int drm_agp_unbind_ttm(struct drm_ttm_backend *backend) > +{ > + struct drm_agp_ttm_backend *agp_be = > + container_of(backend, struct drm_agp_ttm_backend, backend); > + > + DRM_DEBUG("drm_agp_unbind_ttm\n"); > + if (agp_be->mem->is_bound) > + return drm_agp_unbind_memory(agp_be->mem); > + else > + return 0; > +} > + > +static void drm_agp_clear_ttm(struct drm_ttm_backend *backend) > +{ > + struct drm_agp_ttm_backend *agp_be = > + container_of(backend, struct drm_agp_ttm_backend, backend); > + DRM_AGP_MEM *mem = agp_be->mem; > + > + DRM_DEBUG("drm_agp_clear_ttm\n"); > + if (mem) { > + unsigned long num_pages = mem->page_count; > + backend->func->unbind(backend); > + agp_free_memory(mem); > + drm_free_memctl(num_pages * sizeof(void *)); > + } > + agp_be->mem = NULL; > +} > + > +static void drm_agp_destroy_ttm(struct drm_ttm_backend *backend) > +{ > + struct drm_agp_ttm_backend *agp_be; > + > + if (backend) { > + DRM_DEBUG("drm_agp_destroy_ttm\n"); > + agp_be = container_of(backend, struct drm_agp_ttm_backend, backend); > + if (agp_be) { > + if (agp_be->mem) > + backend->func->clear(backend); > + drm_ctl_free(agp_be, sizeof(*agp_be), DRM_MEM_TTM); > + } > + } > +} > + > +static struct drm_ttm_backend_func agp_ttm_backend = { > + .needs_ub_cache_adjust = 
drm_agp_needs_unbind_cache_adjust, > + .populate = drm_agp_populate, > + .clear = drm_agp_clear_ttm, > + .bind = drm_agp_bind_ttm, > + .unbind = drm_agp_unbind_ttm, > + .destroy = drm_agp_destroy_ttm, > +}; > + > +struct drm_ttm_backend *drm_agp_init_ttm(struct drm_device *dev) > +{ > + > + struct drm_agp_ttm_backend *agp_be; > + struct agp_kern_info *info; > + > + if (!dev->agp) { > + DRM_ERROR("AGP is not initialized.\n"); > + return NULL; > + } > + info = &dev->agp->agp_info; > + > + if (info->version.major != AGP_REQUIRED_MAJOR || > + info->version.minor < AGP_REQUIRED_MINOR) { > + DRM_ERROR("Wrong agpgart version %d.%d\n" > + "\tYou need at least version %d.%d.\n", > + info->version.major, > + info->version.minor, > + AGP_REQUIRED_MAJOR, > + AGP_REQUIRED_MINOR); > + return NULL; > + } > + > + > + agp_be = drm_ctl_calloc(1, sizeof(*agp_be), DRM_MEM_TTM); > + if (!agp_be) > + return NULL; > + > + agp_be->mem = NULL; > + > + agp_be->bridge = dev->agp->bridge; > + agp_be->populated = false; > + agp_be->backend.func = &agp_ttm_backend; > + agp_be->backend.dev = dev; > + > + return &agp_be->backend; > +} > +EXPORT_SYMBOL(drm_agp_init_ttm); > + > void drm_agp_chipset_flush(struct drm_device *dev) > { > agp_flush_chipset(dev->agp->bridge); > diff --git a/drivers/gpu/drm/drm_auth.c b/drivers/gpu/drm/drm_auth.c > index a734627..ca7a9ef 100644 > --- a/drivers/gpu/drm/drm_auth.c > +++ b/drivers/gpu/drm/drm_auth.c > @@ -45,14 +45,15 @@ > * the one with matching magic number, while holding the drm_device::struct_mutex > * lock. 
> */ > -static struct drm_file *drm_find_file(struct drm_device * dev, drm_magic_t magic) > +static struct drm_file *drm_find_file(struct drm_master *master, drm_magic_t magic) > { > struct drm_file *retval = NULL; > struct drm_magic_entry *pt; > struct drm_hash_item *hash; > + struct drm_device *dev = master->minor->dev; > > mutex_lock(&dev->struct_mutex); > - if (!drm_ht_find_item(&dev->magiclist, (unsigned long)magic, &hash)) { > + if (!drm_ht_find_item(&master->magiclist, (unsigned long)magic, &hash)) { > pt = drm_hash_entry(hash, struct drm_magic_entry, hash_item); > retval = pt->priv; > } > @@ -71,11 +72,11 @@ static struct drm_file *drm_find_file(struct drm_device * dev, drm_magic_t magic > * associated the magic number hash key in drm_device::magiclist, while holding > * the drm_device::struct_mutex lock. > */ > -static int drm_add_magic(struct drm_device * dev, struct drm_file * priv, > +static int drm_add_magic(struct drm_master *master, struct drm_file *priv, > drm_magic_t magic) > { > struct drm_magic_entry *entry; > - > + struct drm_device *dev = master->minor->dev; > DRM_DEBUG("%d\n", magic); > > entry = drm_alloc(sizeof(*entry), DRM_MEM_MAGIC); > @@ -83,11 +84,10 @@ static int drm_add_magic(struct drm_device * dev, struct drm_file * priv, > return -ENOMEM; > memset(entry, 0, sizeof(*entry)); > entry->priv = priv; > - > entry->hash_item.key = (unsigned long)magic; > mutex_lock(&dev->struct_mutex); > - drm_ht_insert_item(&dev->magiclist, &entry->hash_item); > - list_add_tail(&entry->head, &dev->magicfree); > + drm_ht_insert_item(&master->magiclist, &entry->hash_item); > + list_add_tail(&entry->head, &master->magicfree); > mutex_unlock(&dev->struct_mutex); > > return 0; > @@ -102,20 +102,21 @@ static int drm_add_magic(struct drm_device * dev, struct drm_file * priv, > * Searches and unlinks the entry in drm_device::magiclist with the magic > * number hash key, while holding the drm_device::struct_mutex lock. 
> */
> -static int drm_remove_magic(struct drm_device * dev, drm_magic_t magic)
> +static int drm_remove_magic(struct drm_master *master, drm_magic_t magic)
> {
> struct drm_magic_entry *pt;
> struct drm_hash_item *hash;
> + struct drm_device *dev = master->minor->dev;
>
> DRM_DEBUG("%d\n", magic);
>
> mutex_lock(&dev->struct_mutex);
> - if (drm_ht_find_item(&dev->magiclist, (unsigned long)magic, &hash)) {
> + if (drm_ht_find_item(&master->magiclist, (unsigned long)magic, &hash)) {
> mutex_unlock(&dev->struct_mutex);
> return -EINVAL;
> }
> pt = drm_hash_entry(hash, struct drm_magic_entry, hash_item);
> - drm_ht_remove_item(&dev->magiclist, hash);
> + drm_ht_remove_item(&master->magiclist, hash);
> list_del(&pt->head);
> mutex_unlock(&dev->struct_mutex);
>
> @@ -153,9 +154,9 @@ int drm_getmagic(struct drm_device *dev, void *data, struct drm_file *file_priv)
> ++sequence; /* reserve 0 */
> auth->magic = sequence++;
> spin_unlock(&lock);
> - } while (drm_find_file(dev, auth->magic));
> + } while (drm_find_file(file_priv->master, auth->magic));
> file_priv->magic = auth->magic;
> - drm_add_magic(dev, file_priv, auth->magic);
> + drm_add_magic(file_priv->master, file_priv, auth->magic);
> }
>
> DRM_DEBUG("%u\n", auth->magic);
> @@ -181,9 +182,9 @@ int drm_authmagic(struct drm_device *dev, void *data,
> struct drm_file *file;
>
> DRM_DEBUG("%u\n", auth->magic);
> - if ((file = drm_find_file(dev, auth->magic))) {
> + if ((file = drm_find_file(file_priv->master, auth->magic))) {
> file->authenticated = 1;
> - drm_remove_magic(dev, auth->magic);
> + drm_remove_magic(file_priv->master, auth->magic);
> return 0;
> }
> return -EINVAL;
> diff --git a/drivers/gpu/drm/drm_bo.c b/drivers/gpu/drm/drm_bo.c
> new file mode 100644
> index 0000000..5cec5a0
> --- /dev/null
> +++ b/drivers/gpu/drm/drm_bo.c
> @@ -0,0 +1,2116 @@
> +/**************************************************************************
> + *
> + * Copyright (c) 2006-2007 Tungsten Graphics, Inc., Cedar Park, TX., USA
> + * All Rights Reserved.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the
> + * "Software"), to deal in the Software without restriction, including
> + * without limitation the rights to use, copy, modify, merge, publish,
> + * distribute, sub license, and/or sell copies of the Software, and to
> + * permit persons to whom the Software is furnished to do so, subject to
> + * the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the
> + * next paragraph) shall be included in all copies or substantial portions
> + * of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
> + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
> + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
> + * USE OR OTHER DEALINGS IN THE SOFTWARE.
> + *
> + **************************************************************************/
> +/*
> + * Authors: Thomas Hellström <thomas-at-tungstengraphics-dot-com>
> + */
> +
> +#include "drmP.h"
> +
> +/*
> + * Locking may look a bit complicated but isn't really:
> + *
> + * The buffer usage atomic_t needs to be protected by dev->struct_mutex
> + * when there is a chance that it can be zero before or after the operation.
> + *
> + * dev->struct_mutex also protects all lists and list heads,
> + * Hash tables and hash heads.
> + *
> + * bo->mutex protects the buffer object itself excluding the usage field.
> + * bo->mutex does also protect the buffer list heads, so to manipulate those,
> + * we need both the bo->mutex and the dev->struct_mutex.
> + *
> + * Locking order is bo->mutex, dev->struct_mutex. Therefore list traversal
> + * is a bit complicated. When dev->struct_mutex is released to grab bo->mutex,
> + * the list traversal will, in general, need to be restarted.
> + *
> + */
> +
> +static void drm_bo_destroy_locked(struct drm_buffer_object *bo);
> +static int drm_bo_setup_vm_locked(struct drm_buffer_object *bo);
> +static void drm_bo_unmap_virtual(struct drm_buffer_object *bo);
> +
> +static inline uint64_t drm_bo_type_flags(unsigned type)
> +{
> + return (1ULL << (24 + type));
> +}
> +
> +/*
> + * bo locked. dev->struct_mutex locked.
> + */
> +
> +void drm_bo_add_to_pinned_lru(struct drm_buffer_object *bo)
> +{
> + struct drm_mem_type_manager *man;
> +
> + DRM_ASSERT_LOCKED(&bo->dev->struct_mutex);
> + DRM_ASSERT_LOCKED(&bo->mutex);
> +
> + man = &bo->dev->bm.man[bo->pinned_mem_type];
> + list_add_tail(&bo->pinned_lru, &man->pinned);
> +}
> +
> +void drm_bo_add_to_lru(struct drm_buffer_object *bo)
> +{
> + struct drm_mem_type_manager *man;
> +
> + DRM_ASSERT_LOCKED(&bo->dev->struct_mutex);
> +
> + if (!(bo->mem.proposed_flags & (DRM_BO_FLAG_NO_MOVE | DRM_BO_FLAG_NO_EVICT))
> + || bo->mem.mem_type != bo->pinned_mem_type) {
> + man = &bo->dev->bm.man[bo->mem.mem_type];
> + list_add_tail(&bo->lru, &man->lru);
> + } else {
> + INIT_LIST_HEAD(&bo->lru);
> + }
> +}
> +
> +static int drm_bo_vm_pre_move(struct drm_buffer_object *bo, int old_is_pci)
> +{
> +#ifdef DRM_ODD_MM_COMPAT
> + int ret;
> +
> + if (!bo->map_list.map)
> + return 0;
> +
> + ret = drm_bo_lock_kmm(bo);
> + if (ret)
> + return ret;
> + drm_bo_unmap_virtual(bo);
> + if (old_is_pci)
> + drm_bo_finish_unmap(bo);
> +#else
> + if (!bo->map_list.map)
> + return 0;
> +
> + drm_bo_unmap_virtual(bo);
> +#endif
> + return 0;
> +}
> +
> +static void drm_bo_vm_post_move(struct drm_buffer_object *bo)
> +{
> +#ifdef DRM_ODD_MM_COMPAT
> + int ret;
> +
> + if (!bo->map_list.map)
> + return;
> +
> + ret = drm_bo_remap_bound(bo);
> + if (ret) {
> + DRM_ERROR("Failed to remap a bound buffer object.\n"
> + "\tThis might cause a sigbus later.\n");
> + }
> + drm_bo_unlock_kmm(bo);
> +#endif
> +}
> +
> +/*
> + * Call bo->mutex locked.
> + */
> +
> +int drm_bo_add_ttm(struct drm_buffer_object *bo)
> +{
> + struct drm_device *dev = bo->dev;
> + int ret = 0;
> + uint32_t page_flags = 0;
> +
> + DRM_ASSERT_LOCKED(&bo->mutex);
> + bo->ttm = NULL;
> +
> + if (bo->mem.proposed_flags & DRM_BO_FLAG_WRITE)
> + page_flags |= DRM_TTM_PAGE_WRITE;
> +
> + switch (bo->type) {
> + case drm_bo_type_device:
> + case drm_bo_type_kernel:
> + bo->ttm = drm_ttm_create(dev, bo->num_pages << PAGE_SHIFT,
> + page_flags, dev->bm.dummy_read_page);
> + if (!bo->ttm)
> + ret = -ENOMEM;
> + break;
> + case drm_bo_type_user:
> + bo->ttm = drm_ttm_create(dev, bo->num_pages << PAGE_SHIFT,
> + page_flags | DRM_TTM_PAGE_USER,
> + dev->bm.dummy_read_page);
> + if (!bo->ttm)
> + ret = -ENOMEM;
> +
> + ret = drm_ttm_set_user(bo->ttm, current,
> + bo->buffer_start,
> + bo->num_pages);
> + if (ret)
> + return ret;
> +
> + break;
> + default:
> + DRM_ERROR("Illegal buffer object type\n");
> + ret = -EINVAL;
> + break;
> + }
> +
> + return ret;
> +}
> +EXPORT_SYMBOL(drm_bo_add_ttm);
> +
> +static int drm_bo_handle_move_mem(struct drm_buffer_object *bo,
> + struct drm_bo_mem_reg *mem,
> + int evict, int no_wait)
> +{
> + struct drm_device *dev = bo->dev;
> + struct drm_buffer_manager *bm = &dev->bm;
> + int old_is_pci = drm_mem_reg_is_pci(dev, &bo->mem);
> + int new_is_pci = drm_mem_reg_is_pci(dev, mem);
> + struct drm_mem_type_manager *old_man = &bm->man[bo->mem.mem_type];
> + struct drm_mem_type_manager *new_man = &bm->man[mem->mem_type];
> + int ret = 0;
> +
> + if (old_is_pci || new_is_pci ||
> + ((mem->flags ^ bo->mem.flags) & DRM_BO_FLAG_CACHED))
> + ret = drm_bo_vm_pre_move(bo, old_is_pci);
> + if (ret)
> + return ret;
> +
> + /*
> + * Create and bind a ttm if required.
> + */
> +
> + if (!(new_man->flags & _DRM_FLAG_MEMTYPE_FIXED) && (bo->ttm == NULL)) {
> + ret = drm_bo_add_ttm(bo);
> + if (ret)
> + goto out_err;
> +
> + if (mem->mem_type != DRM_BO_MEM_LOCAL) {
> + ret = drm_ttm_bind(bo->ttm, mem);
> + if (ret)
> + goto out_err;
> + }
> +
> + if (bo->mem.mem_type == DRM_BO_MEM_LOCAL) {
> +
> + struct drm_bo_mem_reg *old_mem = &bo->mem;
> + uint64_t save_flags = old_mem->flags;
> + uint64_t save_proposed_flags = old_mem->proposed_flags;
> +
> + *old_mem = *mem;
> + mem->mm_node = NULL;
> + old_mem->proposed_flags = save_proposed_flags;
> + DRM_FLAG_MASKED(save_flags, mem->flags,
> + DRM_BO_MASK_MEMTYPE);
> + goto moved;
> + }
> +
> + }
> +
> + if (!(old_man->flags & _DRM_FLAG_MEMTYPE_FIXED) &&
> + !(new_man->flags & _DRM_FLAG_MEMTYPE_FIXED))
> + ret = drm_bo_move_ttm(bo, evict, no_wait, mem);
> + else if (dev->driver->bo_driver->move)
> + ret = dev->driver->bo_driver->move(bo, evict, no_wait, mem);
> + else
> + ret = drm_bo_move_memcpy(bo, evict, no_wait, mem);
> +
> + if (ret)
> + goto out_err;
> +
> +moved:
> + if (old_is_pci || new_is_pci)
> + drm_bo_vm_post_move(bo);
> +
> + if (bo->priv_flags & _DRM_BO_FLAG_EVICTED) {
> + ret =
> + dev->driver->bo_driver->invalidate_caches(dev,
> + bo->mem.flags);
> + if (ret)
> + DRM_ERROR("Can not flush read caches\n");
> + }
> +
> + DRM_FLAG_MASKED(bo->priv_flags,
> + (evict) ? _DRM_BO_FLAG_EVICTED : 0,
> + _DRM_BO_FLAG_EVICTED);
> +
> + if (bo->mem.mm_node)
> + bo->offset = (bo->mem.mm_node->start << PAGE_SHIFT) +
> + bm->man[bo->mem.mem_type].gpu_offset;
> +
> +
> + return 0;
> +
> +out_err:
> + if (old_is_pci || new_is_pci)
> + drm_bo_vm_post_move(bo);
> +
> + new_man = &bm->man[bo->mem.mem_type];
> + if ((new_man->flags & _DRM_FLAG_MEMTYPE_FIXED) && bo->ttm) {
> + drm_ttm_unbind(bo->ttm);
> + drm_ttm_destroy(bo->ttm);
> + bo->ttm = NULL;
> + }
> +
> + return ret;
> +}
> +
> +/*
> + * Call bo->mutex locked.
> + * Returns -EBUSY if the buffer is currently rendered to or from. 0 otherwise.
> + */
> +
> +static int drm_bo_busy(struct drm_buffer_object *bo, int check_unfenced)
> +{
> + struct drm_fence_object *fence = bo->fence;
> +
> + if (check_unfenced && (bo->priv_flags & _DRM_BO_FLAG_UNFENCED))
> + return -EBUSY;
> +
> + if (fence) {
> + if (drm_fence_object_signaled(fence, bo->fence_type)) {
> + drm_fence_usage_deref_unlocked(&bo->fence);
> + return 0;
> + }
> + drm_fence_object_flush(fence, DRM_FENCE_TYPE_EXE);
> + if (drm_fence_object_signaled(fence, bo->fence_type)) {
> + drm_fence_usage_deref_unlocked(&bo->fence);
> + return 0;
> + }
> + return -EBUSY;
> + }
> + return 0;
> +}
> +
> +static int drm_bo_check_unfenced(struct drm_buffer_object *bo)
> +{
> + int ret;
> +
> + mutex_lock(&bo->mutex);
> + ret = (bo->priv_flags & _DRM_BO_FLAG_UNFENCED);
> + mutex_unlock(&bo->mutex);
> + return ret;
> +}
> +
> +
> +/*
> + * Call bo->mutex locked.
> + * Wait until the buffer is idle.
> + */
> +
> +int drm_bo_wait(struct drm_buffer_object *bo, int lazy, int interruptible,
> + int no_wait, int check_unfenced)
> +{
> + int ret;
> +
> + DRM_ASSERT_LOCKED(&bo->mutex);
> + while(unlikely(drm_bo_busy(bo, check_unfenced))) {
> + if (no_wait)
> + return -EBUSY;
> +
> + if (check_unfenced && (bo->priv_flags & _DRM_BO_FLAG_UNFENCED)) {
> + mutex_unlock(&bo->mutex);
> + wait_event(bo->event_queue, !drm_bo_check_unfenced(bo));
> + mutex_lock(&bo->mutex);
> + bo->priv_flags |= _DRM_BO_FLAG_UNLOCKED;
> + }
> +
> + if (bo->fence) {
> + struct drm_fence_object *fence;
> + uint32_t fence_type = bo->fence_type;
> +
> + drm_fence_reference_unlocked(&fence, bo->fence);
> + mutex_unlock(&bo->mutex);
> +
> + ret = drm_fence_object_wait(fence, lazy, !interruptible,
> + fence_type);
> +
> + drm_fence_usage_deref_unlocked(&fence);
> + mutex_lock(&bo->mutex);
> + bo->priv_flags |= _DRM_BO_FLAG_UNLOCKED;
> + if (ret)
> + return ret;
> + }
> +
> + }
> + return 0;
> +}
> +EXPORT_SYMBOL(drm_bo_wait);
> +
> +static int drm_bo_expire_fence(struct drm_buffer_object *bo, int allow_errors)
> +{
> + struct drm_device *dev = bo->dev;
> + struct drm_buffer_manager *bm = &dev->bm;
> +
> + if (bo->fence) {
> + if (bm->nice_mode) {
> + unsigned long _end = jiffies + 3 * DRM_HZ;
> + int ret;
> + do {
> + ret = drm_bo_wait(bo, 0, 0, 0, 0);
> + if (ret && allow_errors)
> + return ret;
> +
> + } while (ret && !time_after_eq(jiffies, _end));
> +
> + if (bo->fence) {
> + bm->nice_mode = 0;
> + DRM_ERROR("Detected GPU lockup or "
> + "fence driver was taken down. "
> + "Evicting buffer.\n");
> + }
> + }
> + if (bo->fence)
> + drm_fence_usage_deref_unlocked(&bo->fence);
> + }
> + return 0;
> +}
> +
> +/*
> + * Call dev->struct_mutex locked.
> + * Attempts to remove all private references to a buffer by expiring its
> + * fence object and removing from lru lists and memory managers.
> + */
> +
> +static void drm_bo_cleanup_refs(struct drm_buffer_object *bo, int remove_all)
> +{
> + struct drm_device *dev = bo->dev;
> + struct drm_buffer_manager *bm = &dev->bm;
> +
> + DRM_ASSERT_LOCKED(&dev->struct_mutex);
> +
> + atomic_inc(&bo->usage);
> + mutex_unlock(&dev->struct_mutex);
> + mutex_lock(&bo->mutex);
> +
> + DRM_FLAG_MASKED(bo->priv_flags, 0, _DRM_BO_FLAG_UNFENCED);
> +
> + if (bo->fence && drm_fence_object_signaled(bo->fence,
> + bo->fence_type))
> + drm_fence_usage_deref_unlocked(&bo->fence);
> +
> + if (bo->fence && remove_all)
> + (void)drm_bo_expire_fence(bo, 0);
> +
> + mutex_lock(&dev->struct_mutex);
> +
> + if (!atomic_dec_and_test(&bo->usage))
> + goto out;
> +
> + if (!bo->fence) {
> + list_del_init(&bo->lru);
> + if (bo->mem.mm_node) {
> + drm_mm_put_block(bo->mem.mm_node);
> + if (bo->pinned_node == bo->mem.mm_node)
> + bo->pinned_node = NULL;
> + bo->mem.mm_node = NULL;
> + }
> + list_del_init(&bo->pinned_lru);
> + if (bo->pinned_node) {
> + drm_mm_put_block(bo->pinned_node);
> + bo->pinned_node = NULL;
> + }
> + list_del_init(&bo->ddestroy);
> + mutex_unlock(&bo->mutex);
> + drm_bo_destroy_locked(bo);
> + return;
> + }
> +
> + if (list_empty(&bo->ddestroy)) {
> + drm_fence_object_flush(bo->fence, bo->fence_type);
> + list_add_tail(&bo->ddestroy, &bm->ddestroy);
> + schedule_delayed_work(&bm->wq,
> + ((DRM_HZ / 100) < 1) ? 1 : DRM_HZ / 100);
> + }
> +
> +out:
> + mutex_unlock(&bo->mutex);
> + return;
> +}
> +
> +/*
> + * Verify that refcount is 0 and that there are no internal references
> + * to the buffer object. Then destroy it.
> + */
> +
> +static void drm_bo_destroy_locked(struct drm_buffer_object *bo)
> +{
> + struct drm_device *dev = bo->dev;
> + struct drm_buffer_manager *bm = &dev->bm;
> +
> + DRM_ASSERT_LOCKED(&dev->struct_mutex);
> +
> + DRM_DEBUG("freeing %p\n", bo);
> + if (list_empty(&bo->lru) && bo->mem.mm_node == NULL &&
> + list_empty(&bo->pinned_lru) && bo->pinned_node == NULL &&
> + list_empty(&bo->ddestroy) && atomic_read(&bo->usage) == 0) {
> + if (bo->fence != NULL) {
> + DRM_ERROR("Fence was non-zero.\n");
> + drm_bo_cleanup_refs(bo, 0);
> + return;
> + }
> +
> +#ifdef DRM_ODD_MM_COMPAT
> + BUG_ON(!list_empty(&bo->vma_list));
> + BUG_ON(!list_empty(&bo->p_mm_list));
> +#endif
> +
> + if (bo->ttm) {
> + drm_ttm_unbind(bo->ttm);
> + drm_ttm_destroy(bo->ttm);
> + bo->ttm = NULL;
> + }
> +
> + atomic_dec(&bm->count);
> +
> + drm_ctl_free(bo, sizeof(*bo), DRM_MEM_BUFOBJ);
> +
> + return;
> + }
> +
> + /*
> + * Some stuff is still trying to reference the buffer object.
> + * Get rid of those references.
> + */
> +
> + drm_bo_cleanup_refs(bo, 0);
> +
> + return;
> +}
> +
> +/*
> + * Call dev->struct_mutex locked.
> + */
> +
> +static void drm_bo_delayed_delete(struct drm_device *dev, int remove_all)
> +{
> + struct drm_buffer_manager *bm = &dev->bm;
> +
> + struct drm_buffer_object *entry, *nentry;
> + struct list_head *list, *next;
> +
> + list_for_each_safe(list, next, &bm->ddestroy) {
> + entry = list_entry(list, struct drm_buffer_object, ddestroy);
> +
> + nentry = NULL;
> + DRM_DEBUG("bo is %p, %d\n", entry, entry->num_pages);
> + if (next != &bm->ddestroy) {
> + nentry = list_entry(next, struct drm_buffer_object,
> + ddestroy);
> + atomic_inc(&nentry->usage);
> + }
> +
> + drm_bo_cleanup_refs(entry, remove_all);
> +
> + if (nentry)
> + atomic_dec(&nentry->usage);
> + }
> +}
> +
> +static void drm_bo_delayed_workqueue(struct work_struct *work)
> +{
> + struct drm_buffer_manager *bm =
> + container_of(work, struct drm_buffer_manager, wq.work);
> + struct drm_device *dev = container_of(bm, struct drm_device, bm);
> +
> + DRM_DEBUG("Delayed delete Worker\n");
> +
> + mutex_lock(&dev->struct_mutex);
> + if (!bm->initialized) {
> + mutex_unlock(&dev->struct_mutex);
> + return;
> + }
> + drm_bo_delayed_delete(dev, 0);
> + if (bm->initialized && !list_empty(&bm->ddestroy)) {
> + schedule_delayed_work(&bm->wq,
> + ((DRM_HZ / 100) < 1) ? 1 : DRM_HZ / 100);
> + }
> + mutex_unlock(&dev->struct_mutex);
> +}
> +
> +void drm_bo_usage_deref_locked(struct drm_buffer_object **bo)
> +{
> + struct drm_buffer_object *tmp_bo = *bo;
> + bo = NULL;
> +
> + DRM_ASSERT_LOCKED(&tmp_bo->dev->struct_mutex);
> +
> + if (atomic_dec_and_test(&tmp_bo->usage))
> + drm_bo_destroy_locked(tmp_bo);
> +}
> +EXPORT_SYMBOL(drm_bo_usage_deref_locked);
> +
> +void drm_bo_usage_deref_unlocked(struct drm_buffer_object **bo)
> +{
> + struct drm_buffer_object *tmp_bo = *bo;
> + struct drm_device *dev = tmp_bo->dev;
> +
> + *bo = NULL;
> + if (atomic_dec_and_test(&tmp_bo->usage)) {
> + mutex_lock(&dev->struct_mutex);
> + if (atomic_read(&tmp_bo->usage) == 0)
> + drm_bo_destroy_locked(tmp_bo);
> + mutex_unlock(&dev->struct_mutex);
> + }
> +}
> +EXPORT_SYMBOL(drm_bo_usage_deref_unlocked);
> +
> +void drm_putback_buffer_objects(struct drm_device *dev)
> +{
> + struct drm_buffer_manager *bm = &dev->bm;
> + struct list_head *list = &bm->unfenced;
> + struct drm_buffer_object *entry, *next;
> +
> + mutex_lock(&dev->struct_mutex);
> + list_for_each_entry_safe(entry, next, list, lru) {
> + atomic_inc(&entry->usage);
> + mutex_unlock(&dev->struct_mutex);
> +
> + mutex_lock(&entry->mutex);
> + BUG_ON(!(entry->priv_flags & _DRM_BO_FLAG_UNFENCED));
> + mutex_lock(&dev->struct_mutex);
> +
> + list_del_init(&entry->lru);
> + DRM_FLAG_MASKED(entry->priv_flags, 0, _DRM_BO_FLAG_UNFENCED);
> + wake_up_all(&entry->event_queue);
> +
> + /*
> + * FIXME: Might want to put back on head of list
> + * instead of tail here.
> + */
> +
> + drm_bo_add_to_lru(entry);
> + mutex_unlock(&entry->mutex);
> + drm_bo_usage_deref_locked(&entry);
> + }
> + mutex_unlock(&dev->struct_mutex);
> +}
> +EXPORT_SYMBOL(drm_putback_buffer_objects);
> +
> +/*
> + * Note. The caller has to register (if applicable)
> + * and deregister fence object usage.
> + */
> +
> +int drm_fence_buffer_objects(struct drm_device *dev,
> + struct list_head *list,
> + uint32_t fence_flags,
> + struct drm_fence_object *fence,
> + struct drm_fence_object **used_fence)
> +{
> + struct drm_buffer_manager *bm = &dev->bm;
> + struct drm_buffer_object *entry;
> + uint32_t fence_type = 0;
> + uint32_t fence_class = ~0;
> + int count = 0;
> + int ret = 0;
> + struct list_head *l;
> +
> + mutex_lock(&dev->struct_mutex);
> +
> + if (!list)
> + list = &bm->unfenced;
> +
> + if (fence)
> + fence_class = fence->fence_class;
> +
> + list_for_each_entry(entry, list, lru) {
> + BUG_ON(!(entry->priv_flags & _DRM_BO_FLAG_UNFENCED));
> + fence_type |= entry->new_fence_type;
> + if (fence_class == ~0)
> + fence_class = entry->new_fence_class;
> + else if (entry->new_fence_class != fence_class) {
> + DRM_ERROR("Unmatching fence classes on unfenced list: "
> + "%d and %d.\n",
> + fence_class,
> + entry->new_fence_class);
> + ret = -EINVAL;
> + goto out;
> + }
> + count++;
> + }
> +
> + if (!count) {
> + ret = -EINVAL;
> + goto out;
> + }
> +
> + if (fence) {
> + if ((fence_type & fence->type) != fence_type ||
> + (fence->fence_class != fence_class)) {
> + DRM_ERROR("Given fence doesn't match buffers "
> + "on unfenced list.\n");
> + ret = -EINVAL;
> + goto out;
> + }
> + } else {
> + mutex_unlock(&dev->struct_mutex);
> + ret = drm_fence_object_create(dev, fence_class, fence_type,
> + fence_flags | DRM_FENCE_FLAG_EMIT,
> + &fence);
> + mutex_lock(&dev->struct_mutex);
> + if (ret)
> + goto out;
> + }
> +
> + count = 0;
> + l = list->next;
> + while (l != list) {
> + prefetch(l->next);
> + entry = list_entry(l, struct drm_buffer_object, lru);
> + atomic_inc(&entry->usage);
> + mutex_unlock(&dev->struct_mutex);
> + mutex_lock(&entry->mutex);
> + mutex_lock(&dev->struct_mutex);
> + list_del_init(l);
> + if (entry->priv_flags & _DRM_BO_FLAG_UNFENCED) {
> + count++;
> + if (entry->fence)
> + drm_fence_usage_deref_locked(&entry->fence);
> + entry->fence = drm_fence_reference_locked(fence);
> + entry->fence_class = entry->new_fence_class;
> + entry->fence_type = entry->new_fence_type;
> + DRM_FLAG_MASKED(entry->priv_flags, 0,
> + _DRM_BO_FLAG_UNFENCED);
> + wake_up_all(&entry->event_queue);
> + drm_bo_add_to_lru(entry);
> + }
> + mutex_unlock(&entry->mutex);
> + drm_bo_usage_deref_locked(&entry);
> + l = list->next;
> + }
> + DRM_DEBUG("Fenced %d buffers\n", count);
> +out:
> + mutex_unlock(&dev->struct_mutex);
> + *used_fence = fence;
> + return ret;
> +}
> +EXPORT_SYMBOL(drm_fence_buffer_objects);
> +
> +/*
> + * bo->mutex locked
> + */
> +
> +static int drm_bo_evict(struct drm_buffer_object *bo, unsigned mem_type,
> + int no_wait)
> +{
> + int ret = 0;
> + struct drm_device *dev = bo->dev;
> + struct drm_bo_mem_reg evict_mem;
> +
> + /*
> + * Someone might have modified the buffer before we took the
> + * buffer mutex.
> + */
> +
> + do {
> + bo->priv_flags &= ~_DRM_BO_FLAG_UNLOCKED;
> +
> + if (unlikely(bo->mem.flags &
> + (DRM_BO_FLAG_NO_MOVE | DRM_BO_FLAG_NO_EVICT)))
> + goto out_unlock;
> + if (unlikely(bo->priv_flags & _DRM_BO_FLAG_UNFENCED))
> + goto out_unlock;
> + if (unlikely(bo->mem.mem_type != mem_type))
> + goto out_unlock;
> + ret = drm_bo_wait(bo, 0, 1, no_wait, 0);
> + if (ret)
> + goto out_unlock;
> +
> + } while(bo->priv_flags & _DRM_BO_FLAG_UNLOCKED);
> +
> + evict_mem = bo->mem;
> + evict_mem.mm_node = NULL;
> +
> + evict_mem = bo->mem;
> + evict_mem.proposed_flags = dev->driver->bo_driver->evict_flags(bo);
> +
> + mutex_lock(&dev->struct_mutex);
> + list_del_init(&bo->lru);
> + mutex_unlock(&dev->struct_mutex);
> +
> + ret = drm_bo_mem_space(bo, &evict_mem, no_wait);
> +
> + if (ret) {
> + if (ret != -EAGAIN)
> + DRM_ERROR("Failed to find memory space for "
> + "buffer 0x%p eviction.\n", bo);
> + goto out;
> + }
> +
> + ret = drm_bo_handle_move_mem(bo, &evict_mem, 1, no_wait);
> +
> + if (ret) {
> + if (ret != -EAGAIN)
> + DRM_ERROR("Buffer eviction failed\n");
> + goto out;
> + }
> +
> + DRM_FLAG_MASKED(bo->priv_flags, _DRM_BO_FLAG_EVICTED,
> + _DRM_BO_FLAG_EVICTED);
> +
> +out:
> + mutex_lock(&dev->struct_mutex);
> + if (evict_mem.mm_node) {
> + if (evict_mem.mm_node != bo->pinned_node)
> + drm_mm_put_block(evict_mem.mm_node);
> + evict_mem.mm_node = NULL;
> + }
> + drm_bo_add_to_lru(bo);
> + BUG_ON(bo->priv_flags & _DRM_BO_FLAG_UNLOCKED);
> +out_unlock:
> + mutex_unlock(&dev->struct_mutex);
> +
> + return ret;
> +}
> +
> +/**
> + * Repeatedly evict memory from the LRU for @mem_type until we create enough
> + * space, or we've evicted everything and there isn't enough space.
> + */
> +static int drm_bo_mem_force_space(struct drm_device *dev,
> + struct drm_bo_mem_reg *mem,
> + uint32_t mem_type, int no_wait)
> +{
> + struct drm_mm_node *node;
> + struct drm_buffer_manager *bm = &dev->bm;
> + struct drm_buffer_object *entry;
> + struct drm_mem_type_manager *man = &bm->man[mem_type];
> + struct list_head *lru;
> + unsigned long num_pages = mem->num_pages;
> + int ret;
> +
> + mutex_lock(&dev->struct_mutex);
> + do {
> + node = drm_mm_search_free(&man->manager, num_pages,
> + mem->page_alignment, 1);
> + if (node)
> + break;
> +
> + lru = &man->lru;
> + if (lru->next == lru)
> + break;
> +
> + entry = list_entry(lru->next, struct drm_buffer_object, lru);
> + atomic_inc(&entry->usage);
> + mutex_unlock(&dev->struct_mutex);
> + mutex_lock(&entry->mutex);
> + ret = drm_bo_evict(entry, mem_type, no_wait);
> + mutex_unlock(&entry->mutex);
> + drm_bo_usage_deref_unlocked(&entry);
> + if (ret)
> + return ret;
> + mutex_lock(&dev->struct_mutex);
> + } while (1);
> +
> + if (!node) {
> + mutex_unlock(&dev->struct_mutex);
> + return -ENOMEM;
> + }
> +
> + node = drm_mm_get_block(node, num_pages, mem->page_alignment);
> + if (unlikely(!node)) {
> + mutex_unlock(&dev->struct_mutex);
> + return -ENOMEM;
> + }
> +
> + mutex_unlock(&dev->struct_mutex);
> + mem->mm_node = node;
> + mem->mem_type = mem_type;
> + return 0;
> +}
> +
> +static int drm_bo_mt_compatible(struct drm_mem_type_manager *man,
> + int disallow_fixed,
> + uint32_t mem_type,
> + uint64_t mask, uint32_t *res_mask)
> +{
> + uint64_t cur_flags = drm_bo_type_flags(mem_type);
> + uint64_t flag_diff;
> +
> + if ((man->flags & _DRM_FLAG_MEMTYPE_FIXED) && disallow_fixed)
> + return 0;
> + if (man->flags & _DRM_FLAG_MEMTYPE_CACHED)
> + cur_flags |= DRM_BO_FLAG_CACHED;
> + if (man->flags & _DRM_FLAG_MEMTYPE_MAPPABLE)
> + cur_flags |= DRM_BO_FLAG_MAPPABLE;
> + if (man->flags & _DRM_FLAG_MEMTYPE_CSELECT)
> + DRM_FLAG_MASKED(cur_flags, mask, DRM_BO_FLAG_CACHED);
> +
> + if ((cur_flags & mask & DRM_BO_MASK_MEM) == 0)
> + return 0;
> +
> + if (mem_type == DRM_BO_MEM_LOCAL) {
> + *res_mask = cur_flags;
> + return 1;
> + }
> +
> + flag_diff = (mask ^ cur_flags);
> + if (flag_diff & DRM_BO_FLAG_CACHED_MAPPED)
> + cur_flags |= DRM_BO_FLAG_CACHED_MAPPED;
> +
> + if ((flag_diff & DRM_BO_FLAG_CACHED) &&
> + (!(mask & DRM_BO_FLAG_CACHED) ||
> + (mask & DRM_BO_FLAG_FORCE_CACHING)))
> + return 0;
> +
> + if ((flag_diff & DRM_BO_FLAG_MAPPABLE) &&
> + ((mask & DRM_BO_FLAG_MAPPABLE) ||
> + (mask & DRM_BO_FLAG_FORCE_MAPPABLE)))
> + return 0;
> +
> + *res_mask = cur_flags;
> + return 1;
> +}
> +
> +/**
> + * Creates space for memory region @mem according to its type.
> + *
> + * This function first searches for free space in compatible memory types in
> + * the priority order defined by the driver. If free space isn't found, then
> + * drm_bo_mem_force_space is attempted in priority order to evict and find
> + * space.
> + */
> +int drm_bo_mem_space(struct drm_buffer_object *bo,
> + struct drm_bo_mem_reg *mem, int no_wait)
> +{
> + struct drm_device *dev = bo->dev;
> + struct drm_buffer_manager *bm = &dev->bm;
> + struct drm_mem_type_manager *man;
> +
> + uint32_t num_prios = dev->driver->bo_driver->num_mem_type_prio;
> + const uint32_t *prios = dev->driver->bo_driver->mem_type_prio;
> + uint32_t i;
> + uint32_t mem_type = DRM_BO_MEM_LOCAL;
> + uint32_t cur_flags;
> + int type_found = 0;
> + int type_ok = 0;
> + int has_eagain = 0;
> + struct drm_mm_node *node = NULL;
> + int ret;
> +
> + mem->mm_node = NULL;
> + for (i = 0; i < num_prios; ++i) {
> + mem_type = prios[i];
> + man = &bm->man[mem_type];
> +
> + type_ok = drm_bo_mt_compatible(man,
> + bo->type == drm_bo_type_user,
> + mem_type, mem->proposed_flags,
> + &cur_flags);
> +
> + if (!type_ok)
> + continue;
> +
> + if (mem_type == DRM_BO_MEM_LOCAL)
> + break;
> +
> + if ((mem_type == bo->pinned_mem_type) &&
> + (bo->pinned_node != NULL)) {
> + node = bo->pinned_node;
> + break;
> + }
> +
> + mutex_lock(&dev->struct_mutex);
> + if (man->has_type && man->use_type) {
> + type_found = 1;
> + node = drm_mm_search_free(&man->manager, mem->num_pages,
> + mem->page_alignment, 1);
> + if (node)
> + node = drm_mm_get_block(node, mem->num_pages,
> + mem->page_alignment);
> + }
> + mutex_unlock(&dev->struct_mutex);
> + if (node)
> + break;
> + }
> +
> + if ((type_ok && (mem_type == DRM_BO_MEM_LOCAL)) || node) {
> + mem->mm_node = node;
> + mem->mem_type = mem_type;
> + mem->flags = cur_flags;
> + return 0;
> + }
> +
> + if (!typ... [truncated message content]
|
From: Daniel S. <da...@fo...> - 2008-10-31 10:52:19
|
On Fri, Oct 31, 2008 at 12:33:45AM +0100, Maarten Maathuis wrote:
> On Thu, Oct 30, 2008 at 10:08 PM, Jesse Barnes <jb...@vi...> wrote:
> > This commit adds the core mode setting routines for use by DRM drivers to
> > manage outputs and displays. Originally based on the X.Org Randr 1.2
> > implementation, the code has since been heavily changed by Dave Airlie
> > with contributions by Jesse Barnes, Jakob Bornecrantz and others.
> >
> > This one should probably be split up a bit; I think the TTM stuff in
> > particular could be factored out fairly easily.
> >
> > diff --git a/arch/x86/mm/pat.c b/arch/x86/mm/pat.c
> > index 738fd0f..31ce044 100644
> > --- a/arch/x86/mm/pat.c
> > +++ b/arch/x86/mm/pat.c
> > @@ -11,6 +11,7 @@
> > #include <linux/bootmem.h>
> > #include <linux/debugfs.h>
> > #include <linux/kernel.h>
> > +#include <linux/module.h>
> > #include <linux/gfp.h>
> > #include <linux/mm.h>
> > #include <linux/fs.h>
> > @@ -29,6 +30,7 @@
> >
> > #ifdef CONFIG_X86_PAT
> > int __read_mostly pat_enabled = 1;
> > +EXPORT_SYMBOL_GPL(pat_enabled);
> >
> > void __cpuinit pat_disable(char *reason)
> > {
> > diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
> > index a8b33c2..6723182 100644
> > --- a/drivers/gpu/drm/Kconfig
> > +++ b/drivers/gpu/drm/Kconfig
> > @@ -41,6 +41,14 @@ config DRM_RADEON
> >
> > If M is selected, the module will be called radeon.
> >
> > +config DRM_RADEON_KMS
> > + bool "Enable modesetting on radeon by default"
> > + depends on DRM_RADEON
> > + help
> > + Choose this option if you want kernel modesetting enabled by default,
> > + and you have a new enough userspace to support this. Running old
> > + userspaces with this enabled will cause pain.
> > +
> > config DRM_I810
> > tristate "Intel I810"
> > depends on DRM && AGP && AGP_INTEL
> > @@ -76,6 +84,15 @@ config DRM_I915
> >
> > endchoice
> >
> > +config DRM_I915_KMS
> > + bool "Enable modesetting on intel by default"
> > + depends on DRM_I915
> > + help
> > + Choose this option if you want kernel modesetting enabled by default,
> > + and you have a new enough userspace to support this. Running old
> > + userspaces with this enabled will cause pain.
> > +
> > +
> > config DRM_MGA
> > tristate "Matrox g200/g400"
> > depends on DRM
> > diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
> > index 74da994..48567a9 100644
> > --- a/drivers/gpu/drm/Makefile
> > +++ b/drivers/gpu/drm/Makefile
> > @@ -9,7 +9,9 @@ drm-y := drm_auth.o drm_bufs.o drm_cache.o \
> > drm_drv.o drm_fops.o drm_gem.o drm_ioctl.o drm_irq.o \
> > drm_lock.o drm_memory.o drm_proc.o drm_stub.o drm_vm.o \
> > drm_agpsupport.o drm_scatter.o ati_pcigart.o drm_pci.o \
> > - drm_sysfs.o drm_hashtab.o drm_sman.o drm_mm.o
> > + drm_sysfs.o drm_hashtab.o drm_sman.o drm_mm.o \
> > + drm_fence.o drm_bo.o drm_ttm.o drm_bo_move.o \
> > + drm_crtc.o drm_crtc_helper.o drm_modes.o drm_edid.o
> >
> > drm-$(CONFIG_COMPAT) += drm_ioc32.o
> >
> > diff --git a/drivers/gpu/drm/ati_pcigart.c b/drivers/gpu/drm/ati_pcigart.c
> > index c533d0c..adc57dd 100644
> > --- a/drivers/gpu/drm/ati_pcigart.c
> > +++ b/drivers/gpu/drm/ati_pcigart.c
> > @@ -34,9 +34,55 @@
> > #include "drmP.h"
> >
> > # define ATI_PCIGART_PAGE_SIZE 4096 /**< PCI GART page size */
> > +# define ATI_PCIGART_PAGE_MASK (~(ATI_PCIGART_PAGE_SIZE-1))
> >
> > -static int drm_ati_alloc_pcigart_table(struct drm_device *dev,
> > - struct drm_ati_pcigart_info *gart_info)
> > +#define ATI_PCIE_WRITE 0x4
> > +#define ATI_PCIE_READ 0x8
> > +
> > +static __inline__ void gart_insert_page_into_table(struct drm_ati_pcigart_info *gart_info, dma_addr_t
> > addr, volatile u32 *pci_gart)
> > +{
> > + u32 page_base;
> > +
> > + page_base = (u32)addr & ATI_PCIGART_PAGE_MASK;
> > + switch(gart_info->gart_reg_if) {
> > + case DRM_ATI_GART_IGP:
> > + page_base |= (upper_32_bits(addr) & 0xff) << 4;
> > + page_base |= 0xc;
> > + break;
> > + case DRM_ATI_GART_PCIE:
> > + page_base >>= 8;
> > + page_base |= (upper_32_bits(addr) & 0xff) << 24;
> > + page_base |= ATI_PCIE_READ | ATI_PCIE_WRITE;
> > + break;
> > + default:
> > + case DRM_ATI_GART_PCI:
> > + break;
> > + }
> > + *pci_gart = cpu_to_le32(page_base);
> > +}
> > +
> > +static __inline__ dma_addr_t gart_get_page_from_table(struct drm_ati_pcigart_info *gart_info,
> > volatile u32 *pci_gart)
> > +{
> > + dma_addr_t retval;
> > + switch(gart_info->gart_reg_if) {
> > + case DRM_ATI_GART_IGP:
> > + retval = (*pci_gart & ATI_PCIGART_PAGE_MASK);
> > + retval += (((*pci_gart & 0xf0) >> 4) << 16) << 16;
> > + break;
> > + case DRM_ATI_GART_PCIE:
> > + retval = (*pci_gart & ~0xc);
> > + retval <<= 8;
> > + break;
> > + case DRM_ATI_GART_PCI:
> > + retval = *pci_gart;
> > + break;
> > + }
> > +
> > + return retval;
> > +}
> > +
> > +int drm_ati_alloc_pcigart_table(struct drm_device *dev,
> > + struct drm_ati_pcigart_info *gart_info)
> > {
> > gart_info->table_handle = drm_pci_alloc(dev, gart_info->table_size,
> > PAGE_SIZE,
> > @@ -44,12 +90,25 @@ static int drm_ati_alloc_pcigart_table(struct drm_device *dev,
> > if (gart_info->table_handle == NULL)
> > return -ENOMEM;
> >
> > +#ifdef CONFIG_X86
> > + /* IGPs only exist on x86 in any case */
> > + if (gart_info->gart_reg_if == DRM_ATI_GART_IGP)
> > + set_memory_uc((unsigned long)gart_info->table_handle->vaddr, gart_info->table_size >>
> > PAGE_SHIFT);
> > +#endif
> > +
> > + memset(gart_info->table_handle->vaddr, 0, gart_info->table_size);
> > return 0;
> > }
> > +EXPORT_SYMBOL(drm_ati_alloc_pcigart_table);
> >
> > static void drm_ati_free_pcigart_table(struct drm_device *dev,
> > struct drm_ati_pcigart_info *gart_info)
> > {
> > +#ifdef CONFIG_X86
> > + /* IGPs only exist on x86 in any case */
> > + if (gart_info->gart_reg_if == DRM_ATI_GART_IGP)
> > + set_memory_wb((unsigned long)gart_info->table_handle->vaddr, gart_info->table_size >>
> > PAGE_SHIFT);
> > +#endif
> > drm_pci_free(dev, gart_info->table_handle);
> > gart_info->table_handle = NULL;
> > }
> > @@ -63,7 +122,6 @@ int drm_ati_pcigart_cleanup(struct drm_device *dev, struct drm_ati_pcigart_info
> >
> > /* we need to support large memory configurations */
> > if (!entry) {
> > - DRM_ERROR("no scatter/gather memory!\n");
> > return 0;
> > }
> >
> > @@ -98,17 +156,14 @@ int drm_ati_pcigart_init(struct drm_device *dev, struct drm_ati_pcigart_info *ga
> > struct drm_sg_mem *entry = dev->sg;
> > void *address = NULL;
> > unsigned long pages;
> > - u32 *pci_gart, page_base;
> > + u32 *pci_gart;
> > dma_addr_t bus_address = 0;
> > int i, j, ret = 0;
> > int max_pages;
> > + dma_addr_t entry_addr;
> >
> > - if (!entry) {
> > - DRM_ERROR("no scatter/gather memory!\n");
> > - goto done;
> > - }
> >
> > - if (gart_info->gart_table_location == DRM_ATI_GART_MAIN) {
> > + if (gart_info->gart_table_location == DRM_ATI_GART_MAIN && gart_info->table_handle == NULL) {
> > DRM_DEBUG("PCI: no table in VRAM: using normal RAM\n");
> >
> > ret = drm_ati_alloc_pcigart_table(dev, gart_info);
> > @@ -116,15 +171,19 @@ int drm_ati_pcigart_init(struct drm_device *dev, struct drm_ati_pcigart_info *ga
> > DRM_ERROR("cannot allocate PCI GART page!\n");
> > goto done;
> > }
> > + }
> >
> > + if (gart_info->gart_table_location == DRM_ATI_GART_MAIN) {
> > address = gart_info->table_handle->vaddr;
> > bus_address = gart_info->table_handle->busaddr;
> > } else {
> > address = gart_info->addr;
> > bus_address = gart_info->bus_addr;
> > - DRM_DEBUG("PCI: Gart Table: VRAM %08LX mapped at %08lX\n",
> > - (unsigned long long)bus_address,
> > - (unsigned long)address);
> > + }
> > +
> > + if (!entry) {
> > + DRM_ERROR("no scatter/gather memory!\n");
> > + goto done;
> > }
> >
> > pci_gart = (u32 *) address;
> > @@ -133,8 +192,6 @@ int drm_ati_pcigart_init(struct drm_device *dev, struct drm_ati_pcigart_info *ga
> > pages = (entry->pages <= max_pages)
> > ? entry->pages : max_pages;
> >
> > - memset(pci_gart, 0, max_pages * sizeof(u32));
> > -
> > for (i = 0; i < pages; i++) {
> > /* we need to support large memory configurations */
> > entry->busaddr[i] = pci_map_page(dev->pdev, entry->pagelist[i],
> > @@ -146,32 +203,18 @@ int drm_ati_pcigart_init(struct drm_device *dev, struct drm_ati_pcigart_info *ga
> > bus_address = 0;
> > goto done;
> > }
> > - page_base = (u32) entry->busaddr[i];
> >
> > + entry_addr = entry->busaddr[i];
> > for (j = 0; j < (PAGE_SIZE / ATI_PCIGART_PAGE_SIZE); j++) {
> > - switch(gart_info->gart_reg_if) {
> > - case DRM_ATI_GART_IGP:
> > - *pci_gart = cpu_to_le32((page_base) | 0xc);
> > - break;
> > - case DRM_ATI_GART_PCIE:
> > - *pci_gart = cpu_to_le32((page_base >> 8) | 0xc);
> > - break;
> > - default:
> > - case DRM_ATI_GART_PCI:
> > - *pci_gart = cpu_to_le32(page_base);
> > - break;
> > - }
> > + gart_insert_page_into_table(gart_info, entry_addr, pci_gart);
> > pci_gart++;
> > - page_base += ATI_PCIGART_PAGE_SIZE;
> > + entry_addr += ATI_PCIGART_PAGE_SIZE;
> > }
> > }
> > +
> > ret = 1;
> >
> > -#if defined(__i386__) || defined(__x86_64__)
> > - wbinvd();
> > -#else
> > mb();
> > -#endif
> >
> > done:
> > gart_info->addr = address;
> > @@ -179,3 +222,142 @@ int drm_ati_pcigart_init(struct drm_device *dev, struct drm_ati_pcigart_info *ga
> > return ret;
> > }
> > EXPORT_SYMBOL(drm_ati_pcigart_init);
> > +
> > +static int ati_pcigart_needs_unbind_cache_adjust(struct drm_ttm_backend *backend)
> > +{
> > + return ((backend->flags & DRM_BE_FLAG_BOUND_CACHED) ? 0 : 1);
> > +}
> > +
> > +static int ati_pcigart_populate(struct drm_ttm_backend *backend,
> > + unsigned long num_pages,
> > + struct page **pages,
> > + struct page *dummy_read_page)
> > +{
> > + struct ati_pcigart_ttm_backend *atipci_be =
> > + container_of(backend, struct ati_pcigart_ttm_backend, backend);
> > +
> > + atipci_be->pages = pages;
> > + atipci_be->num_pages = num_pages;
> > + atipci_be->populated = 1;
> > + return 0;
> > +}
> > +
> > +static int ati_pcigart_bind_ttm(struct drm_ttm_backend *backend,
> > + struct drm_bo_mem_reg *bo_mem)
> > +{
> > + struct ati_pcigart_ttm_backend *atipci_be =
> > + container_of(backend, struct ati_pcigart_ttm_backend, backend);
> > + off_t j;
> > + int i;
> > + struct drm_ati_pcigart_info *info = atipci_be->gart_info;
> > + volatile u32 *pci_gart;
> > + dma_addr_t offset = bo_mem->mm_node->start;
> > + dma_addr_t page_base;
> > +
> > + pci_gart = info->addr;
> > +
> > + j = offset;
> > + while (j < (offset + atipci_be->num_pages)) {
> > + if (gart_get_page_from_table(info, pci_gart + j))
> > + return -EBUSY;
> > + j++;
> > + }
> > +
> > + for (i = 0, j = offset; i < atipci_be->num_pages; i++, j++) {
> > + struct page *cur_page = atipci_be->pages[i];
> > + /* write value */
> > + page_base = page_to_phys(cur_page);
> > + gart_insert_page_into_table(info, page_base, pci_gart + j);
> > + }
> > +
> > + mb();
> > + atipci_be->gart_flush_fn(atipci_be->dev);
> > +
> > + atipci_be->bound = 1;
> > + atipci_be->offset = offset;
> > + /* need to traverse table and add entries */
> > + DRM_DEBUG("\n");
> > + return 0;
> > +}
> > +
> > +static int ati_pcigart_unbind_ttm(struct drm_ttm_backend *backend)
> > +{
> > + struct ati_pcigart_ttm_backend *atipci_be =
> > + container_of(backend, struct ati_pcigart_ttm_backend, backend);
> > + struct drm_ati_pcigart_info *info = atipci_be->gart_info;
> > + unsigned long offset = atipci_be->offset;
> > + int i;
> > + off_t j;
> > + volatile u32 *pci_gart = info->addr;
> > +
> > + if
(atipci_be->bound != 1) > > + return -EINVAL; > > + > > + for (i = 0, j = offset; i < atipci_be->num_pages; i++, j++) { > > + *(pci_gart + j) = 0; > > + } > > + > > + mb(); > > + atipci_be->gart_flush_fn(atipci_be->dev); > > + atipci_be->bound = 0; > > + atipci_be->offset = 0; > > + return 0; > > +} > > + > > +static void ati_pcigart_clear_ttm(struct drm_ttm_backend *backend) > > +{ > > + struct ati_pcigart_ttm_backend *atipci_be = > > + container_of(backend, struct ati_pcigart_ttm_backend, backend); > > + > > + DRM_DEBUG("\n"); > > + if (atipci_be->pages) { > > + backend->func->unbind(backend); > > + atipci_be->pages = NULL; > > + > > + } > > + atipci_be->num_pages = 0; > > +} > > + > > +static void ati_pcigart_destroy_ttm(struct drm_ttm_backend *backend) > > +{ > > + struct ati_pcigart_ttm_backend *atipci_be; > > + if (backend) { > > + DRM_DEBUG("\n"); > > + atipci_be = container_of(backend, struct ati_pcigart_ttm_backend, backend); > > + if (atipci_be) { > > + if (atipci_be->pages) { > > + backend->func->clear(backend); > > + } > > + drm_ctl_free(atipci_be, sizeof(*atipci_be), DRM_MEM_TTM); > > + } > > + } > > +} > > + > > +static struct drm_ttm_backend_func ati_pcigart_ttm_backend = > > +{ > > + .needs_ub_cache_adjust = ati_pcigart_needs_unbind_cache_adjust, > > + .populate = ati_pcigart_populate, > > + .clear = ati_pcigart_clear_ttm, > > + .bind = ati_pcigart_bind_ttm, > > + .unbind = ati_pcigart_unbind_ttm, > > + .destroy = ati_pcigart_destroy_ttm, > > +}; > > + > > +struct drm_ttm_backend *ati_pcigart_init_ttm(struct drm_device *dev, struct drm_ati_pcigart_info > > *info, void (*gart_flush_fn)(struct drm_device *dev)) > > +{ > > + struct ati_pcigart_ttm_backend *atipci_be; > > + > > + atipci_be = drm_ctl_calloc(1, sizeof (*atipci_be), DRM_MEM_TTM); > > + if (!atipci_be) > > + return NULL; > > + > > + atipci_be->populated = 0; > > + atipci_be->backend.func = &ati_pcigart_ttm_backend; > > +// atipci_be->backend.mem_type = DRM_BO_MEM_TT; > > + 
atipci_be->gart_info = info; > > + atipci_be->gart_flush_fn = gart_flush_fn; > > + atipci_be->dev = dev; > > + > > + return &atipci_be->backend; > > +} > > +EXPORT_SYMBOL(ati_pcigart_init_ttm); > > diff --git a/drivers/gpu/drm/drm_agpsupport.c b/drivers/gpu/drm/drm_agpsupport.c > > index 3d33b82..e048aa2 100644 > > --- a/drivers/gpu/drm/drm_agpsupport.c > > +++ b/drivers/gpu/drm/drm_agpsupport.c > > @@ -496,6 +496,177 @@ drm_agp_bind_pages(struct drm_device *dev, > > } > > EXPORT_SYMBOL(drm_agp_bind_pages); > > > > +/* > > + * AGP ttm backend interface. > > + */ > > + > > +#ifndef AGP_USER_TYPES > > +#define AGP_USER_TYPES (1 << 16) > > +#define AGP_USER_MEMORY (AGP_USER_TYPES) > > +#define AGP_USER_CACHED_MEMORY (AGP_USER_TYPES + 1) > > +#endif > > +#define AGP_REQUIRED_MAJOR 0 > > +#define AGP_REQUIRED_MINOR 102 > > + > > +static int drm_agp_needs_unbind_cache_adjust(struct drm_ttm_backend *backend) > > +{ > > + return ((backend->flags & DRM_BE_FLAG_BOUND_CACHED) ? 0 : 1); > > +} > > + > > + > > +static int drm_agp_populate(struct drm_ttm_backend *backend, > > + unsigned long num_pages, struct page **pages, > > + struct page *dummy_read_page) > > +{ > > + struct drm_agp_ttm_backend *agp_be = > > + container_of(backend, struct drm_agp_ttm_backend, backend); > > + struct page **cur_page, **last_page = pages + num_pages; > > + DRM_AGP_MEM *mem; > > + int dummy_page_count = 0; > > + > > + if (drm_alloc_memctl(num_pages * sizeof(void *))) > > + return -1; > > + > > + DRM_DEBUG("drm_agp_populate_ttm\n"); > > + mem = drm_agp_allocate_memory(agp_be->bridge, num_pages, AGP_USER_MEMORY); > > + if (!mem) { > > + drm_free_memctl(num_pages * sizeof(void *)); > > + return -1; > > + } > > + > > + DRM_DEBUG("Current page count is %ld\n", (long) mem->page_count); > > + mem->page_count = 0; > > + for (cur_page = pages; cur_page < last_page; ++cur_page) { > > + struct page *page = *cur_page; > > + if (!page) { > > + page = dummy_read_page; > > + ++dummy_page_count; > > + } > > + 
mem->memory[mem->page_count++] = phys_to_gart(page_to_phys(page)); > > + } > > + if (dummy_page_count) > > + DRM_DEBUG("Mapped %d dummy pages\n", dummy_page_count); > > + agp_be->mem = mem; > > + return 0; > > +} > > + > > +static int drm_agp_bind_ttm(struct drm_ttm_backend *backend, > > + struct drm_bo_mem_reg *bo_mem) > > +{ > > + struct drm_agp_ttm_backend *agp_be = > > + container_of(backend, struct drm_agp_ttm_backend, backend); > > + DRM_AGP_MEM *mem = agp_be->mem; > > + int ret; > > + int snooped = (bo_mem->flags & DRM_BO_FLAG_CACHED) && !(bo_mem->flags & > > DRM_BO_FLAG_CACHED_MAPPED); > > + > > + DRM_DEBUG("drm_agp_bind_ttm\n"); > > + mem->is_flushed = true; > > + mem->type = AGP_USER_MEMORY; > > + /* CACHED MAPPED implies not snooped memory */ > > + if (snooped) > > + mem->type = AGP_USER_CACHED_MEMORY; > > + > > + ret = drm_agp_bind_memory(mem, bo_mem->mm_node->start); > > + if (ret) > > + DRM_ERROR("AGP Bind memory failed\n"); > > + > > + DRM_FLAG_MASKED(backend->flags, (bo_mem->flags & DRM_BO_FLAG_CACHED) ? 
> > + DRM_BE_FLAG_BOUND_CACHED : 0, > > + DRM_BE_FLAG_BOUND_CACHED); > > + return ret; > > +} > > + > > +static int drm_agp_unbind_ttm(struct drm_ttm_backend *backend) > > +{ > > + struct drm_agp_ttm_backend *agp_be = > > + container_of(backend, struct drm_agp_ttm_backend, backend); > > + > > + DRM_DEBUG("drm_agp_unbind_ttm\n"); > > + if (agp_be->mem->is_bound) > > + return drm_agp_unbind_memory(agp_be->mem); > > + else > > + return 0; > > +} > > + > > +static void drm_agp_clear_ttm(struct drm_ttm_backend *backend) > > +{ > > + struct drm_agp_ttm_backend *agp_be = > > + container_of(backend, struct drm_agp_ttm_backend, backend); > > + DRM_AGP_MEM *mem = agp_be->mem; > > + > > + DRM_DEBUG("drm_agp_clear_ttm\n"); > > + if (mem) { > > + unsigned long num_pages = mem->page_count; > > + backend->func->unbind(backend); > > + agp_free_memory(mem); > > + drm_free_memctl(num_pages * sizeof(void *)); > > + } > > + agp_be->mem = NULL; > > +} > > + > > +static void drm_agp_destroy_ttm(struct drm_ttm_backend *backend) > > +{ > > + struct drm_agp_ttm_backend *agp_be; > > + > > + if (backend) { > > + DRM_DEBUG("drm_agp_destroy_ttm\n"); > > + agp_be = container_of(backend, struct drm_agp_ttm_backend, backend); > > + if (agp_be) { > > + if (agp_be->mem) > > + backend->func->clear(backend); > > + drm_ctl_free(agp_be, sizeof(*agp_be), DRM_MEM_TTM); > > + } > > + } > > +} > > + > > +static struct drm_ttm_backend_func agp_ttm_backend = { > > + .needs_ub_cache_adjust = drm_agp_needs_unbind_cache_adjust, > > + .populate = drm_agp_populate, > > + .clear = drm_agp_clear_ttm, > > + .bind = drm_agp_bind_ttm, > > + .unbind = drm_agp_unbind_ttm, > > + .destroy = drm_agp_destroy_ttm, > > +}; > > + > > +struct drm_ttm_backend *drm_agp_init_ttm(struct drm_device *dev) > > +{ > > + > > + struct drm_agp_ttm_backend *agp_be; > > + struct agp_kern_info *info; > > + > > + if (!dev->agp) { > > + DRM_ERROR("AGP is not initialized.\n"); > > + return NULL; > > + } > > + info = &dev->agp->agp_info; > > + > 
> + if (info->version.major != AGP_REQUIRED_MAJOR || > > + info->version.minor < AGP_REQUIRED_MINOR) { > > + DRM_ERROR("Wrong agpgart version %d.%d\n" > > + "\tYou need at least version %d.%d.\n", > > + info->version.major, > > + info->version.minor, > > + AGP_REQUIRED_MAJOR, > > + AGP_REQUIRED_MINOR); > > + return NULL; > > + } > > + > > + > > + agp_be = drm_ctl_calloc(1, sizeof(*agp_be), DRM_MEM_TTM); > > + if (!agp_be) > > + return NULL; > > + > > + agp_be->mem = NULL; > > + > > + agp_be->bridge = dev->agp->bridge; > > + agp_be->populated = false; > > + agp_be->backend.func = &agp_ttm_backend; > > + agp_be->backend.dev = dev; > > + > > + return &agp_be->backend; > > +} > > +EXPORT_SYMBOL(drm_agp_init_ttm); > > + > > void drm_agp_chipset_flush(struct drm_device *dev) > > { > > agp_flush_chipset(dev->agp->bridge); > > diff --git a/drivers/gpu/drm/drm_auth.c b/drivers/gpu/drm/drm_auth.c > > index a734627..ca7a9ef 100644 > > --- a/drivers/gpu/drm/drm_auth.c > > +++ b/drivers/gpu/drm/drm_auth.c > > @@ -45,14 +45,15 @@ > > * the one with matching magic number, while holding the drm_device::struct_mutex > > * lock. > > */ > > -static struct drm_file *drm_find_file(struct drm_device * dev, drm_magic_t magic) > > +static struct drm_file *drm_find_file(struct drm_master *master, drm_magic_t magic) > > { > > struct drm_file *retval = NULL; > > struct drm_magic_entry *pt; > > struct drm_hash_item *hash; > > + struct drm_device *dev = master->minor->dev; > > > > mutex_lock(&dev->struct_mutex); > > - if (!drm_ht_find_item(&dev->magiclist, (unsigned long)magic, &hash)) { > > + if (!drm_ht_find_item(&master->magiclist, (unsigned long)magic, &hash)) { > > pt = drm_hash_entry(hash, struct drm_magic_entry, hash_item); > > retval = pt->priv; > > } > > @@ -71,11 +72,11 @@ static struct drm_file *drm_find_file(struct drm_device * dev, drm_magic_t magic > > * associated the magic number hash key in drm_device::magiclist, while holding > > * the drm_device::struct_mutex lock. 
> > */ > > -static int drm_add_magic(struct drm_device * dev, struct drm_file * priv, > > +static int drm_add_magic(struct drm_master *master, struct drm_file *priv, > > drm_magic_t magic) > > { > > struct drm_magic_entry *entry; > > - > > + struct drm_device *dev = master->minor->dev; > > DRM_DEBUG("%d\n", magic); > > > > entry = drm_alloc(sizeof(*entry), DRM_MEM_MAGIC); > > @@ -83,11 +84,10 @@ static int drm_add_magic(struct drm_device * dev, struct drm_file * priv, > > return -ENOMEM; > > memset(entry, 0, sizeof(*entry)); > > entry->priv = priv; > > - > > entry->hash_item.key = (unsigned long)magic; > > mutex_lock(&dev->struct_mutex); > > - drm_ht_insert_item(&dev->magiclist, &entry->hash_item); > > - list_add_tail(&entry->head, &dev->magicfree); > > + drm_ht_insert_item(&master->magiclist, &entry->hash_item); > > + list_add_tail(&entry->head, &master->magicfree); > > mutex_unlock(&dev->struct_mutex); > > > > return 0; > > @@ -102,20 +102,21 @@ static int drm_add_magic(struct drm_device * dev, struct drm_file * priv, > > * Searches and unlinks the entry in drm_device::magiclist with the magic > > * number hash key, while holding the drm_device::struct_mutex lock. 
> > */ > > -static int drm_remove_magic(struct drm_device * dev, drm_magic_t magic) > > +static int drm_remove_magic(struct drm_master *master, drm_magic_t magic) > > { > > struct drm_magic_entry *pt; > > struct drm_hash_item *hash; > > + struct drm_device *dev = master->minor->dev; > > > > DRM_DEBUG("%d\n", magic); > > > > mutex_lock(&dev->struct_mutex); > > - if (drm_ht_find_item(&dev->magiclist, (unsigned long)magic, &hash)) { > > + if (drm_ht_find_item(&master->magiclist, (unsigned long)magic, &hash)) { > > mutex_unlock(&dev->struct_mutex); > > return -EINVAL; > > } > > pt = drm_hash_entry(hash, struct drm_magic_entry, hash_item); > > - drm_ht_remove_item(&dev->magiclist, hash); > > + drm_ht_remove_item(&master->magiclist, hash); > > list_del(&pt->head); > > mutex_unlock(&dev->struct_mutex); > > > > @@ -153,9 +154,9 @@ int drm_getmagic(struct drm_device *dev, void *data, struct drm_file *file_priv) > > ++sequence; /* reserve 0 */ > > auth->magic = sequence++; > > spin_unlock(&lock); > > - } while (drm_find_file(dev, auth->magic)); > > + } while (drm_find_file(file_priv->master, auth->magic)); > > file_priv->magic = auth->magic; > > - drm_add_magic(dev, file_priv, auth->magic); > > + drm_add_magic(file_priv->master, file_priv, auth->magic); > > } > > > > DRM_DEBUG("%u\n", auth->magic); > > @@ -181,9 +182,9 @@ int drm_authmagic(struct drm_device *dev, void *data, > > struct drm_file *file; > > > > DRM_DEBUG("%u\n", auth->magic); > > - if ((file = drm_find_file(dev, auth->magic))) { > > + if ((file = drm_find_file(file_priv->master, auth->magic))) { > > file->authenticated = 1; > > - drm_remove_magic(dev, auth->magic); > > + drm_remove_magic(file_priv->master, auth->magic); > > return 0; > > } > > return -EINVAL; > > diff --git a/drivers/gpu/drm/drm_bo.c b/drivers/gpu/drm/drm_bo.c > > new file mode 100644 > > index 0000000..5cec5a0 > > --- /dev/null > > +++ b/drivers/gpu/drm/drm_bo.c > > @@ -0,0 +1,2116 @@ > > 
+/************************************************************************** > > + * > > + * Copyright (c) 2006-2007 Tungsten Graphics, Inc., Cedar Park, TX., USA > > + * All Rights Reserved. > > + * > > + * Permission is hereby granted, free of charge, to any person obtaining a > > + * copy of this software and associated documentation files (the > > + * "Software"), to deal in the Software without restriction, including > > + * without limitation the rights to use, copy, modify, merge, publish, > > + * distribute, sub license, and/or sell copies of the Software, and to > > + * permit persons to whom the Software is furnished to do so, subject to > > + * the following conditions: > > + * > > + * The above copyright notice and this permission notice (including the > > + * next paragraph) shall be included in all copies or substantial portions > > + * of the Software. > > + * > > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR > > + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, > > + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL > > + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, > > + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR > > + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE > > + * USE OR OTHER DEALINGS IN THE SOFTWARE. > > + * > > + **************************************************************************/ > > +/* > > + * Authors: Thomas Hellstr�m <thomas-at-tungstengraphics-dot-com> > > + */ > > + > > +#include "drmP.h" > > + > > +/* > > + * Locking may look a bit complicated but isn't really: > > + * > > + * The buffer usage atomic_t needs to be protected by dev->struct_mutex > > + * when there is a chance that it can be zero before or after the operation. > > + * > > + * dev->struct_mutex also protects all lists and list heads, > > + * Hash tables and hash heads. 
> > + * > > + * bo->mutex protects the buffer object itself excluding the usage field. > > + * bo->mutex does also protect the buffer list heads, so to manipulate those, > > + * we need both the bo->mutex and the dev->struct_mutex. > > + * > > + * Locking order is bo->mutex, dev->struct_mutex. Therefore list traversal > > + * is a bit complicated. When dev->struct_mutex is released to grab bo->mutex, > > + * the list traversal will, in general, need to be restarted. > > + * > > + */ > > + > > +static void drm_bo_destroy_locked(struct drm_buffer_object *bo); > > +static int drm_bo_setup_vm_locked(struct drm_buffer_object *bo); > > +static void drm_bo_unmap_virtual(struct drm_buffer_object *bo); > > + > > +static inline uint64_t drm_bo_type_flags(unsigned type) > > +{ > > + return (1ULL << (24 + type)); > > +} > > + > > +/* > > + * bo locked. dev->struct_mutex locked. > > + */ > > + > > +void drm_bo_add_to_pinned_lru(struct drm_buffer_object *bo) > > +{ > > + struct drm_mem_type_manager *man; > > + > > + DRM_ASSERT_LOCKED(&bo->dev->struct_mutex); > > + DRM_ASSERT_LOCKED(&bo->mutex); > > + > > + man = &bo->dev->bm.man[bo->pinned_mem_type]; > > + list_add_tail(&bo->pinned_lru, &man->pinned); > > +} > > + > > +void drm_bo_add_to_lru(struct drm_buffer_object *bo) > > +{ > > + struct drm_mem_type_manager *man; > > + > > + DRM_ASSERT_LOCKED(&bo->dev->struct_mutex); > > + > > + if (!(bo->mem.proposed_flags & (DRM_BO_FLAG_NO_MOVE | DRM_BO_FLAG_NO_EVICT)) > > + || bo->mem.mem_type != bo->pinned_mem_type) { > > + man = &bo->dev->bm.man[bo->mem.mem_type]; > > + list_add_tail(&bo->lru, &man->lru); > > + } else { > > + INIT_LIST_HEAD(&bo->lru); > > + } > > +} > > + > > +static int drm_bo_vm_pre_move(struct drm_buffer_object *bo, int old_is_pci) > > +{ > > +#ifdef DRM_ODD_MM_COMPAT > > + int ret; > > + > > + if (!bo->map_list.map) > > + return 0; > > + > > + ret = drm_bo_lock_kmm(bo); > > + if (ret) > > + return ret; > > + drm_bo_unmap_virtual(bo); > > + if (old_is_pci) > > + 
drm_bo_finish_unmap(bo); > > +#else > > + if (!bo->map_list.map) > > + return 0; > > + > > + drm_bo_unmap_virtual(bo); > > +#endif > > + return 0; > > +} > > + > > +static void drm_bo_vm_post_move(struct drm_buffer_object *bo) > > +{ > > +#ifdef DRM_ODD_MM_COMPAT > > + int ret; > > + > > + if (!bo->map_list.map) > > + return; > > + > > + ret = drm_bo_remap_bound(bo); > > + if (ret) { > > + DRM_ERROR("Failed to remap a bound buffer object.\n" > > + "\tThis might cause a sigbus later.\n"); > > + } > > + drm_bo_unlock_kmm(bo); > > +#endif > > +} > > + > > +/* > > + * Call bo->mutex locked. > > + */ > > + > > +int drm_bo_add_ttm(struct drm_buffer_object *bo) > > +{ > > + struct drm_device *dev = bo->dev; > > + int ret = 0; > > + uint32_t page_flags = 0; > > + > > + DRM_ASSERT_LOCKED(&bo->mutex); > > + bo->ttm = NULL; > > + > > + if (bo->mem.proposed_flags & DRM_BO_FLAG_WRITE) > > + page_flags |= DRM_TTM_PAGE_WRITE; > > + > > + switch (bo->type) { > > + case drm_bo_type_device: > > + case drm_bo_type_kernel: > > + bo->ttm = drm_ttm_create(dev, bo->num_pages << PAGE_SHIFT, > > + page_flags, dev->bm.dummy_read_page); > > + if (!bo->ttm) > > + ret = -ENOMEM; > > + break; > > + case drm_bo_type_user: > > + bo->ttm = drm_ttm_create(dev, bo->num_pages << PAGE_SHIFT, > > + page_flags | DRM_TTM_PAGE_USER, > > + dev->bm.dummy_read_page); > > + if (!bo->ttm) > > + ret = -ENOMEM; > > + > > + ret = drm_ttm_set_user(bo->ttm, current, > > + bo->buffer_start, > > + bo->num_pages); > > + if (ret) > > + return ret; > > + > > + break; > > + default: > > + DRM_ERROR("Illegal buffer object type\n"); > > + ret = -EINVAL; > > + break; > > + } > > + > > + return ret; > > +} > > +EXPORT_SYMBOL(drm_bo_add_ttm); > > + > > +static int drm_bo_handle_move_mem(struct drm_buffer_object *bo, > > + struct drm_bo_mem_reg *mem, > > + int evict, int no_wait) > > +{ > > + struct drm_device *dev = bo->dev; > > + struct drm_buffer_manager *bm = &dev->bm; > > + int old_is_pci = drm_mem_reg_is_pci(dev, 
&bo->mem); > > + int new_is_pci = drm_mem_reg_is_pci(dev, mem); > > + struct drm_mem_type_manager *old_man = &bm->man[bo->mem.mem_type]; > > + struct drm_mem_type_manager *new_man = &bm->man[mem->mem_type]; > > + int ret = 0; > > + > > + if (old_is_pci || new_is_pci || > > + ((mem->flags ^ bo->mem.flags) & DRM_BO_FLAG_CACHED)) > > + ret = drm_bo_vm_pre_move(bo, old_is_pci); > > + if (ret) > > + return ret; > > + > > + /* > > + * Create and bind a ttm if required. > > + */ > > + > > + if (!(new_man->flags & _DRM_FLAG_MEMTYPE_FIXED) && (bo->ttm == NULL)) { > > + ret = drm_bo_add_ttm(bo); > > + if (ret) > > + goto out_err; > > + > > + if (mem->mem_type != DRM_BO_MEM_LOCAL) { > > + ret = drm_ttm_bind(bo->ttm, mem); > > + if (ret) > > + goto out_err; > > + } > > + > > + if (bo->mem.mem_type == DRM_BO_MEM_LOCAL) { > > + > > + struct drm_bo_mem_reg *old_mem = &bo->mem; > > + uint64_t save_flags = old_mem->flags; > > + uint64_t save_proposed_flags = old_mem->proposed_flags; > > + > > + *old_mem = *mem; > > + mem->mm_node = NULL; > > + old_mem->proposed_flags = save_proposed_flags; > > + DRM_FLAG_MASKED(save_flags, mem->flags, > > + DRM_BO_MASK_MEMTYPE); > > + goto moved; > > + } > > + > > + } > > + > > + if (!(old_man->flags & _DRM_FLAG_MEMTYPE_FIXED) && > > + !(new_man->flags & _DRM_FLAG_MEMTYPE_FIXED)) > > + ret = drm_bo_move_ttm(bo, evict, no_wait, mem); > > + else if (dev->driver->bo_driver->move) > > + ret = dev->driver->bo_driver->move(bo, evict, no_wait, mem); > > + else > > + ret = drm_bo_move_memcpy(bo, evict, no_wait, mem); > > + > > + if (ret) > > + goto out_err; > > + > > +moved: > > + if (old_is_pci || new_is_pci) > > + drm_bo_vm_post_move(bo); > > + > > + if (bo->priv_flags & _DRM_BO_FLAG_EVICTED) { > > + ret = > > + dev->driver->bo_driver->invalidate_caches(dev, > > + bo->mem.flags); > > + if (ret) > > + DRM_ERROR("Can not flush read caches\n"); > > + } > > + > > + DRM_FLAG_MASKED(bo->priv_flags, > > + (evict) ? 
_DRM_BO_FLAG_EVICTED : 0, > > + _DRM_BO_FLAG_EVICTED); > > + > > + if (bo->mem.mm_node) > > + bo->offset = (bo->mem.mm_node->start << PAGE_SHIFT) + > > + bm->man[bo->mem.mem_type].gpu_offset; > > + > > + > > + return 0; > > + > > +out_err: > > + if (old_is_pci || new_is_pci) > > + drm_bo_vm_post_move(bo); > > + > > + new_man = &bm->man[bo->mem.mem_type]; > > + if ((new_man->flags & _DRM_FLAG_MEMTYPE_FIXED) && bo->ttm) { > > + drm_ttm_unbind(bo->ttm); > > + drm_ttm_destroy(bo->ttm); > > + bo->ttm = NULL; > > + } > > + > > + return ret; > > +} > > + > > +/* > > + * Call bo->mutex locked. > > + * Returns -EBUSY if the buffer is currently rendered to or from. 0 otherwise. > > + */ > > + > > +static int drm_bo_busy(struct drm_buffer_object *bo, int check_unfenced) > > +{ > > + struct drm_fence_object *fence = bo->fence; > > + > > + if (check_unfenced && (bo->priv_flags & _DRM_BO_FLAG_UNFENCED)) > > + return -EBUSY; > > + > > + if (fence) { > > + if (drm_fence_object_signaled(fence, bo->fence_type)) { > > + drm_fence_usage_deref_unlocked(&bo->fence); > > + return 0; > > + } > > + drm_fence_object_flush(fence, DRM_FENCE_TYPE_EXE); > > + if (drm_fence_object_signaled(fence, bo->fence_type)) { > > + drm_fence_usage_deref_unlocked(&bo->fence); > > + return 0; > > + } > > + return -EBUSY; > > + } > > + return 0; > > +} > > + > > +static int drm_bo_check_unfenced(struct drm_buffer_object *bo) > > +{ > > + int ret; > > + > > + mutex_lock(&bo->mutex); > > + ret = (bo->priv_flags & _DRM_BO_FLAG_UNFENCED); > > + mutex_unlock(&bo->mutex); > > + return ret; > > +} > > + > > + > > +/* > > + * Call bo->mutex locked. > > + * Wait until the buffer is idle. 
> > + */ > > + > > +int drm_bo_wait(struct drm_buffer_object *bo, int lazy, int interruptible, > > + int no_wait, int check_unfenced) > > +{ > > + int ret; > > + > > + DRM_ASSERT_LOCKED(&bo->mutex); > > + while(unlikely(drm_bo_busy(bo, check_unfenced))) { > > + if (no_wait) > > + return -EBUSY; > > + > > + if (check_unfenced && (bo->priv_flags & _DRM_BO_FLAG_UNFENCED)) { > > + mutex_unlock(&bo->mutex); > > + wait_event(bo->event_queue, !drm_bo_check_unfenced(bo)); > > + mutex_lock(&bo->mutex); > > + bo->priv_flags |= _DRM_BO_FLAG_UNLOCKED; > > + } > > + > > + if (bo->fence) { > > + struct drm_fence_object *fence; > > + uint32_t fence_type = bo->fence_type; > > + > > + drm_fence_reference_unlocked(&fence, bo->fence); > > + mutex_unlock(&bo->mutex); > > + > > + ret = drm_fence_object_wait(fence, lazy, !interruptible, > > + fence_type); > > + > > + drm_fence_usage_deref_unlocked(&fence); > > + mutex_lock(&bo->mutex); > > + bo->priv_flags |= _DRM_BO_FLAG_UNLOCKED; > > + if (ret) > > + return ret; > > + } > > + > > + } > > + return 0; > > +} > > +EXPORT_SYMBOL(drm_bo_wait); > > + > > +static int drm_bo_expire_fence(struct drm_buffer_object *bo, int allow_errors) > > +{ > > + struct drm_device *dev = bo->dev; > > + struct drm_buffer_manager *bm = &dev->bm; > > + > > + if (bo->fence) { > > + if (bm->nice_mode) { > > + unsigned long _end = jiffies + 3 * DRM_HZ; > > + int ret; > > + do { > > + ret = drm_bo_wait(bo, 0, 0, 0, 0); > > + if (ret && allow_errors) > > + return ret; > > + > > + } while (ret && !time_after_eq(jiffies, _end)); > > + > > + if (bo->fence) { > > + bm->nice_mode = 0; > > + DRM_ERROR("Detected GPU lockup or " > > + "fence driver was taken down. " > > + "Evicting buffer.\n"); > > + } > > + } > > + if (bo->fence) > > + drm_fence_usage_deref_unlocked(&bo->fence); > > + } > > + return 0; > > +} > > + > > +/* > > + * Call dev->struct_mutex locked. 
> > + * Attempts to remove all private references to a buffer by expiring its > > + * fence object and removing from lru lists and memory managers. > > + */ > > + > > +static void drm_bo_cleanup_refs(struct drm_buffer_object *bo, int remove_all) > > +{ > > + struct drm_device *dev = bo->dev; > > + struct drm_buffer_manager *bm = &dev->bm; > > + > > + DRM_ASSERT_LOCKED(&dev->struct_mutex); > > + > > + atomic_inc(&bo->usage); > > + mutex_unlock(&dev->struct_mutex); > > + mutex_lock(&bo->mutex); > > + > > + DRM_FLAG_MASKED(bo->priv_flags, 0, _DRM_BO_FLAG_UNFENCED); > > + > > + if (bo->fence && drm_fence_object_signaled(bo->fence, > > + bo->fence_type)) > > + drm_fence_usage_deref_unlocked(&bo->fence); > > + > > + if (bo->fence && remove_all) > > + (void)drm_bo_expire_fence(bo, 0); > > + > > + mutex_lock(&dev->struct_mutex); > > + > > + if (!atomic_dec_and_test(&bo->usage)) > > + goto out; > > + > > + if (!bo->fence) { > > + list_del_init(&bo->lru); > > + if (bo->mem.mm_node) { > > + drm_mm_put_block(bo->mem.mm_node); > > + if (bo->pinned_node == bo->mem.mm_node) > > + bo->pinned_node = NULL; > > + bo->mem.mm_node = NULL; > > + } > > + list_del_init(&bo->pinned_lru); > > + if (bo->pinned_node) { > > + drm_mm_put_block(bo->pinned_node); > > + bo->pinned_node = NULL; > > + } > > + list_del_init(&bo->ddestroy); > > + mutex_unlock(&bo->mutex); > > + drm_bo_destroy_locked(bo); > > + return; > > + } > > + > > + if (list_empty(&bo->ddestroy)) { > > + drm_fence_object_flush(bo->fence, bo->fence_type); > > + list_add_tail(&bo->ddestroy, &bm->ddestroy); > > + schedule_delayed_work(&bm->wq, > > + ((DRM_HZ / 100) < 1) ? 1 : DRM_HZ / 100); > > + } > > + > > +out: > > + mutex_unlock(&bo->mutex); > > + return; > > +} > > + > > +/* > > + * Verify that refcount is 0 and that there are no internal references > > + * to the buffer object. Then destroy it. 
> > + */ > > + > > +static void drm_bo_destroy_locked(struct drm_buffer_object *bo) > > +{ > > + struct drm_device *dev = bo->dev; > > + struct drm_buffer_manager *bm = &dev->bm; > > + > > + DRM_ASSERT_LOCKED(&dev->struct_mutex); > > + > > + DRM_DEBUG("freeing %p\n", bo); > > + if (list_empty(&bo->lru) && bo->mem.mm_node == NULL && > > + list_empty(&bo->pinned_lru) && bo->pinned_node == NULL && > > + list_empty(&bo->ddestroy) && atomic_read(&bo->usage) == 0) { > > + if (bo->fence != NULL) { > > + DRM_ERROR("Fence was non-zero.\n"); > > + drm_bo_cleanup_refs(bo, 0); > > + return; > > + } > > + > > +#ifdef DRM_ODD_MM_COMPAT > > + BUG_ON(!list_empty(&bo->vma_list)); > > + BUG_ON(!list_empty(&bo->p_mm_list)); > > +#endif > > + > > + if (bo->ttm) { > > + drm_ttm_unbind(bo->ttm); > > + drm_ttm_destroy(bo->ttm); > > + bo->ttm = NULL; > > + } > > + > > + atomic_dec(&bm->count); > > + > > + drm_ctl_free(bo, sizeof(*bo), DRM_MEM_BUFOBJ); > > + > > + return; > > + } > > + > > + /* > > + * Some stuff is still trying to reference the buffer object. > > + * Get rid of those references. > > + */ > > + > > + drm_bo_cleanup_refs(bo, 0); > > + > > + return; > > +} > > + > > +/* > > + * Call dev->struct_mutex locked. 
> > + */ > > + > > +static void drm_bo_delayed_delete(struct drm_device *dev, int remove_all) > > +{ > > + struct drm_buffer_manager *bm = &dev->bm; > > + > > + struct drm_buffer_object *entry, *nentry; > > + struct list_head *list, *next; > > + > > + list_for_each_safe(list, next, &bm->ddestroy) { > > + entry = list_entry(list, struct drm_buffer_object, ddestroy); > > + > > + nentry = NULL; > > + DRM_DEBUG("bo is %p, %d\n", entry, entry->num_pages); > > + if (next != &bm->ddestroy) { > > + nentry = list_entry(next, struct drm_buffer_object, > > + ddestroy); > > + atomic_inc(&nentry->usage); > > + } > > + > > + drm_bo_cleanup_refs(entry, remove_all); > > + > > + if (nentry) > > + atomic_dec(&nentry->usage); > > + } > > +} > > + > > +static void drm_bo_delayed_workqueue(struct work_struct *work) > > +{ > > + struct drm_buffer_manager *bm = > > + container_of(work, struct drm_buffer_manager, wq.work); > > + struct drm_device *dev = container_of(bm, struct drm_device, bm); > > + > > + DRM_DEBUG("Delayed delete Worker\n"); > > + > > + mutex_lock(&dev->struct_mutex); > > + if (!bm->initialized) { > > + mutex_unlock(&dev->struct_mutex); > > + return; > > + } > > + drm_bo_delayed_delete(dev, 0); > > + if (bm->initialized && !list_empty(&bm->ddestroy)) { > > + schedule_delayed_work(&bm->wq, > > + ((DRM_HZ / 100) < 1) ? 
1 : DRM_HZ / 100);
> > +	}
> > +	mutex_unlock(&dev->struct_mutex);
> > +}
> > +
> > +void drm_bo_usage_deref_locked(struct drm_buffer_object **bo)
> > +{
> > +	struct drm_buffer_object *tmp_bo = *bo;
> > +	bo = NULL;
> > +
> > +	DRM_ASSERT_LOCKED(&tmp_bo->dev->struct_mutex);
> > +
> > +	if (atomic_dec_and_test(&tmp_bo->usage))
> > +		drm_bo_destroy_locked(tmp_bo);
> > +}
> > +EXPORT_SYMBOL(drm_bo_usage_deref_locked);
> > +
> > +void drm_bo_usage_deref_unlocked(struct drm_buffer_object **bo)
> > +{
> > +	struct drm_buffer_object *tmp_bo = *bo;
> > +	struct drm_device *dev = tmp_bo->dev;
> > +
> > +	*bo = NULL;
> > +	if (atomic_dec_and_test(&tmp_bo->usage)) {
> > +		mutex_lock(&dev->struct_mutex);
> > +		if (atomic_read(&tmp_bo->usage) == 0)
> > +			drm_bo_destroy_locked(tmp_bo);
> > +		mutex_unlock(&dev->struct_mutex);
> > +	}
> > +}
> > +EXPORT_SYMBOL(drm_bo_usage_deref_unlocked);
> > +
> > +void drm_putback_buffer_objects(struct drm_device *dev)
> > +{
> > +	struct drm_buffer_manager *bm = &dev->bm;
> > +	struct list_head *list = &bm->unfenced;
> > +	struct drm_buffer_object *entry, *next;
> > +
> > +	mutex_lock(&dev->struct_mutex);
> > +	list_for_each_entry_safe(entry, next, list, lru) {
> > +		atomic_inc(&entry->usage);
> > +		mutex_unlock(&dev->struct_mutex);
> > +
> > +		mutex_lock(&entry->mutex);
> > +		BUG_ON(!(entry->priv_flags & _DRM_BO_FLAG_UNFENCED));
> > +		mutex_lock(&dev->struct_mutex);
> > +
> > +		list_del_init(&entry->lru);
> > +		DRM_FLAG_MASKED(entry->priv_flags, 0, _DRM_BO_FLAG_UNFENCED);
> > +		wake_up_all(&entry->event_queue);
> > +
> > +		/*
> > +		 * FIXME: Might want to put back on head of list
> > +		 * instead of tail here.
> > +		 */
> > +
> > +		drm_bo_add_to_lru(entry);
> > +		mutex_unlock(&entry->mutex);
> > +		drm_bo_usage_deref_locked(&entry);
> > +	}
> > +	mutex_unlock(&dev->struct_mutex);
> > +}
> > +EXPORT_SYMBOL(drm_putback_buffer_objects);
> > +
> > +/*
> > + * Note. The caller has to register (if applicable)
> > + * and deregister fence object usage.
> > + */
> > +
> > +int drm_fence_buffer_objects(struct drm_device *dev,
> > +			     struct list_head *list,
> > +			     uint32_t fence_flags,
> > +			     struct drm_fence_object *fence,
> > +			     struct drm_fence_object **used_fence)
> > +{
> > +	struct drm_buffer_manager *bm = &dev->bm;
> > +	struct drm_buffer_object *entry;
> > +	uint32_t fence_type = 0;
> > +	uint32_t fence_class = ~0;
> > +	int count = 0;
> > +	int ret = 0;
> > +	struct list_head *l;
> > +
> > +	mutex_lock(&dev->struct_mutex);
> > +
> > +	if (!list)
> > +		list = &bm->unfenced;
> > +
> > +	if (fence)
> > +		fence_class = fence->fence_class;
> > +
> > +	list_for_each_entry(entry, list, lru) {
> > +		BUG_ON(!(entry->priv_flags & _DRM_BO_FLAG_UNFENCED));
> > +		fence_type |= entry->new_fence_type;
> > +		if (fence_class == ~0)
> > +			fence_class = entry->new_fence_class;
> > +		else if (entry->new_fence_class != fence_class) {
> > +			DRM_ERROR("Unmatching fence classes on unfenced list: "
> > +				  "%d and %d.\n",
> > +				  fence_class,
> > +				  entry->new_fence_class);
> > +			ret = -EINVAL;
> > +			goto out;
> > +		}
> > +		count++;
> > +	}
> > +
> > +	if (!count) {
> > +		ret = -EINVAL;
> > +		goto out;
> > +	}
> > +
> > +	if (fence) {
> > +		if ((fence_type & fence->type) != fence_type ||
> > +		    (fence->fence_class != fence_class)) {
> > +			DRM_ERROR("Given fence doesn't match buffers "
> > +				  "on unfenced list.\n");
> > +			ret = -EINVAL;
> > +			goto out;
> > +		}
> > +	} else {
> > +		mutex_unlock(&dev->struct_mutex);
> > +		ret = drm_fence_object_create(dev, fence_class, fence_type,
> > +					      fence_flags | DRM_FENCE_FLAG_EMIT,
> > +					      &fence);
> > +		mutex_lock(&dev->struct_mutex);
> > +		if (ret)
> > +			goto out;
> > +	}
> > +
> > +	count = 0;
> > +	l = list->next;
> > +	while (l != list) {
> > +		prefetch(l->next);
> > +		entry = list_entry(l, struct drm_buffer_object, lru);
> > +		atomic_inc(&entry->usage);
> > +		mutex_unlock(&dev->struct_mutex);
> > +		mutex_lock(&entry->mutex);
> > +		mutex_lock(&dev->struct_mutex);
> > +		list_del_init(l);
> > +		if (entry->priv_flags & _DRM_BO_FLAG_UNFENCED) {
> > +			count++;
> > +			if (entry->fence)
> > +				drm_fence_usage_deref_locked(&entry->fence);
> > +			entry->fence = drm_fence_reference_locked(fence);
> > +			entry->fence_class = entry->new_fence_class;
> > +			entry->fence_type = entry->new_fence_type;
> > +			DRM_FLAG_MASKED(entry->priv_flags, 0,
> > +					_DRM_BO_FLAG_UNFENCED);
> > +			wake_up_all(&entry->event_queue);
> > +			drm_bo_add_to_lru(entry);
> > +		}
> > +		mutex_unlock(&entry->mutex);
> > +		drm_bo_usage_deref_locked(&entry);
> > +		l = list->next;
> > +	}
> > +	DRM_DEBUG("Fenced %d buffers\n", count);
> > +out:
> > +	mutex_unlock(&dev->struct_mutex);
> > +	*used_fence = fence;
> > +	return ret;
> > +}
> > +EXPORT_SYMBOL(drm_fence_buffer_objects);
> > +
> > +/*
> > + * bo->mutex locked
> > + */
> > +
> > +static int drm_bo_evict(struct drm_buffer_object *bo, unsigned mem_type,
> > +			int no_wait)
> > +{
> > +	int ret = 0;
> > +	struct drm_device *dev = bo->dev;
> > +	struct drm_bo_mem_reg evict_mem;
> > +
> > +	/*
> > +	 * Someone might have modified the buffer before we took the
> > +	 * buffer mutex.
> > +	 */
> > +
> > +	do {
> > +		bo->priv_flags &= ~_DRM_BO_FLAG_UNLOCKED;
> > +
> > +		if (unlikely(bo->mem.flags &
> > +			     (DRM_BO_FLAG_NO_MOVE | DRM_BO_FLAG_NO_EVICT)))
> > +			goto out_unlock;
> > +		if (unlikely(bo->priv_flags & _DRM_BO_FLAG_UNFENCED))
> > +			goto out_unlock;
> > +		if (unlikely(bo->mem.mem_type != mem_type))
> > +			goto out_unlock;
> > +		ret = drm_bo_wait(bo, 0, 1, no_wait, 0);
> > +		if (ret)
> > +			goto out_unlock;
> > +
> > +	} while(bo->priv_flags & _DRM_BO_FLAG_UNLOCKED);
> > +
> > +	evict_mem = bo->mem;
> > +	evict_mem.mm_node = NULL;
> > +
> > +	evict_mem = bo->mem;
> > +	evict_mem.proposed_flags = dev->driver->bo_driver->evict_flags(bo);
> > +
> > +	mutex_lock(&dev->struct_mutex);
> > +	list_del_init(&bo->lru);
> > +	mutex_unlock(&dev->struct_mutex);
> > +
> > +	ret = drm_bo_mem_space(bo, &evict_mem, no_wait);
> > +
> > +	if (ret) {
> > +		if (ret != -EAGAIN)
> > +			DRM_ERROR("Failed to find memory space for "
> > +				  "buffer 0x%p eviction.\n", bo);
> > +		goto out;
> > +	}
> > +
> > +	ret = drm_bo_handle_move_mem(bo, &evict_mem, 1, no_wait);
> > +
> > +	if (ret) {
> > +		if (ret != -EAGAIN)
> > +			DRM_ERROR("Buffer eviction failed\n");
> > +		goto out;
> > +	}
> > +
> > +	DRM_FLAG_MASKED(bo->priv_flags, _DRM_BO_FLAG_EVICTED,
> > +			_DRM_BO_FLAG_EVICTED);
> > +
> > +out:
> > +	mutex_lock(&dev->struct_mutex);
> > +	if (evict_mem.mm_node) {
> > +		if (evict_mem.mm_node != bo->pinned_node)
> > +			drm_mm_put_block(evict_mem.mm_node);
> > +		evict_mem.mm_node = NULL;
> > +	}
> > +	drm_bo_add_to_lru(bo);
> > +	BUG_ON(bo->priv_flags & _DRM_BO_FLAG_UNLOCKED);
> > +out_unlock:
> > +	mutex_unlock(&dev->struct_mutex);
> > +
> > +	return ret;
> > +}
> > +
> > +/**
> > + * Repeatedly evict memory from the LRU for @mem_type until we create enough
> > + * space, or we've evicted everything and there isn't enough space.
> > + */
> > +static int drm_bo_mem_force_space(struct drm_device *dev,
> > +				  struct drm_bo_mem_reg *mem,
> > +				  uint32_t mem_type, int no_wait)
> > +{
> > +	struct drm_mm_node *node;
> > +	struct drm_buffer_manager *bm = &dev->bm;
> > +	struct drm_buffer_object *entry;
> > +	struct drm_mem_type_manager *man = &bm->man[mem_type];
> > +	struct list_head *lru;
> > +	unsigned long num_pages = mem->num_pages;
> > +	int ret;
> > +
> > +	mutex_lock(&dev->struct_mutex);
> > +	do {
> > +		node = drm_mm_search_free(&man->manager, num_pages,
> > +					  mem->page_alignment, 1);
> > +		if (node)
> > +			break;
> > +
> > +		lru = &man->lru;
> > +		if (lru->next == lru)
> > +			break;
> > +
> > +		entry = list_entry(lru->next, struct drm_buffer_object, lru);
> > +		atomic_inc(&entry->usage);
> > +		mutex_unlock(&dev->struct_mutex);
> > +		mutex_lock(&entry->mutex);
> > +		ret = drm_bo_evict(entry, mem_type, no_wait);
> > +		mutex_unlock(&entry->mutex);
> > +		drm_bo_usage_deref_unlocked(&entry);
> > +		if (ret)
> > +			return ret;
> > +		mutex_lock(&dev->struct_mutex);
> > +	} while (1);
> > +
> > +	if (!node) {
> > +		mutex_unlock(&dev->struct_mutex);
> > +		return -ENOMEM;
> > +	}
> > +
> > +	node = drm_mm_get_block(node, num_pages, mem->page_alignment);
> > +	if (unlikely(!node)) {
> > +		mutex_unlock(&dev->struct_mutex);
> > +		return -ENOMEM;
> > +	}
> > +
> > +	mutex_unlock(&dev->struct_mutex);
> > +	mem->mm_node = node;
> > +	mem->mem_type = mem_type;
> > +	return 0;
> > +}
> > +
> > +static int drm_bo_mt_compatible(struct drm_mem_type_manager *man,
> > +				int disallow_fixed,
> > +				uint32_t mem_type,
> > +				uint64_t mask, uint32_t *res_mask)
> > +{
> > +	uint64_t cur_flags = drm_bo_type_flags(mem_type);
> > +	uint64_t flag_diff;
> > +
> > +	if ((man->flags & _DRM_FLAG_MEMTYPE_FIXED) && disallow_fixed)
> > +		return 0;
> > +	if (man->flags & _DRM_FLAG_MEMTYPE_CACHED)
> > +		cur_flags |= DRM_BO_FLAG_CACHED;
> > +	if (man->flags & _DRM_FLAG_MEMTYPE_MAPPABLE)
> > +		cur_flags |= DRM_BO_FLAG_MAPPABLE;
> > +	if (man->flags & _DRM_FLAG_MEMTYPE_CSELECT)
> > +		DRM_FLAG_MASKED(cur_flags, mask, DRM_BO_FLAG_CACHED);
> > +
> > +	if ((cur_flags & mask & DRM_BO_MASK_MEM) == 0)
> > +		return 0;
> > +
> > +	if (mem_type == DRM_BO_MEM_LOCAL) {
> > +		*res_mask = cur_flags;
> > +		return 1;
> > +	}
> > +
> > +	flag_diff = (mask ^ cur_flags);
> > +	if (flag_diff & DRM_BO_FLAG_CACHED_MAPPED)
> > +		c... [truncated message content]
From: Thomas H. <th...@tu...> - 2008-10-31 11:01:41
|
Jesse Barnes wrote:
> This commit adds the core mode setting routines for use by DRM drivers to
> manage outputs and displays. Originally based on the X.Org Randr 1.2
> implementation, the code has since been heavily changed by Dave Airlie
> with contributions by Jesse Barnes, Jakob Bornecrantz and others.
>
> This one should probably be split up a bit; I think the TTM stuff in
> particular could be factored out fairly easily.
>

Jesse,

We must split out TTM from anything that goes into DRM next for now, as we're about to re-add it in a device-dependent form with a well-defined kernel-only API. (This is probably going to happen within a couple of weeks.)

A minimal user-space API will be added when there are drivers supporting it. I guess the first one will be a reworked via driver, following up with other work.

So for now, I guess the best thing is to strip the TTM parts completely and not consider the drivers that rely on it.

I have a patch lying around that strips TTM from modesetting-gem and disables the build of radeon, radeon-ms and nouveau, if that would help...

/Thomas
|
From: Dave A. <ai...@li...> - 2008-10-31 22:37:28
|
> We must split out TTM from anything that goes into DRM next for now, as
> we're about to re-add it in a device dependant
> form with a well defined kernel only API. (This is probably going to
> happen within a couple of weeks).
>
> A minimal user-space API will be added when there are drivers supporting
> it. I guess the first one will be a reworked via driver following up
> with other work.
>
> So for now, I guess the best thing is to strip the TTM parts completely
> and not consider the drivers that rely on it.
>
> I have a patch lying around that strips TTM from modesetting-gem and
> disables the build of radeon, radeon-ms and nouvea, if that would help...

No don't do that, I have a working radeon driver in there and would like to keep working on it.

I think we can prepare core/intel patches without touching this stuff. When you can publish the new TTM changes I can look at rebasing radeon on top of them.

Dave.

> /Thomas
|
From: Jesse B. <jb...@vi...> - 2008-10-31 18:02:44
|
On Friday, October 31, 2008 4:01 am Thomas Hellström wrote:
> Jesse Barnes wrote:
> > This commit adds the core mode setting routines for use by DRM drivers to
> > manage outputs and displays. Originally based on the X.Org Randr 1.2
> > implementation, the code has since been heavily changed by Dave Airlie
> > with contributions by Jesse Barnes, Jakob Bornecrantz and others.
> >
> > This one should probably be split up a bit; I think the TTM stuff in
> > particular could be factored out fairly easily.
>
> Jesse,
> We must split out TTM from anything that goes into DRM next for now, as
> we're about to re-add it in a device dependant
> form with a well defined kernel only API. (This is probably going to
> happen within a couple of weeks).
>
> A minimal user-space API will be added when there are drivers supporting
> it. I guess the first one will be a reworked via driver following up
> with other work.
>
> So for now, I guess the best thing is to strip the TTM parts completely
> and not consider the drivers that rely on it.
>
> I have a patch lying around that strips TTM from modesetting-gem and
> disables the build of radeon, radeon-ms and nouvea, if that would help...

Yeah that would help a bit. I'm not in a position to push those bits anyway, so it would be good if my next set of patches didn't have any TTM or radeon bits...

Thanks,
Jesse
|
From: Dave A. <ai...@gm...> - 2008-10-31 21:18:30
|
On Sat, Nov 1, 2008 at 4:02 AM, Jesse Barnes <jb...@vi...> wrote:
> On Friday, October 31, 2008 4:01 am Thomas Hellström wrote:
>> Jesse Barnes wrote:
>> > This commit adds the core mode setting routines for use by DRM drivers to
>> > manage outputs and displays. Originally based on the X.Org Randr 1.2
>> > implementation, the code has since been heavily changed by Dave Airlie
>> > with contributions by Jesse Barnes, Jakob Bornecrantz and others.
>> >
>> > This one should probably be split up a bit; I think the TTM stuff in
>> > particular could be factored out fairly easily.
>>
>> Jesse,
>> We must split out TTM from anything that goes into DRM next for now, as
>> we're about to re-add it in a device dependant
>> form with a well defined kernel only API. (This is probably going to
>> happen within a couple of weeks).
>>
>> A minimal user-space API will be added when there are drivers supporting
>> it. I guess the first one will be a reworked via driver following up
>> with other work.
>>
>> So for now, I guess the best thing is to strip the TTM parts completely
>> and not consider the drivers that rely on it.
>>
>> I have a patch lying around that strips TTM from modesetting-gem and
>> disables the build of radeon, radeon-ms and nouvea, if that would help...
>
> Yeah that would help a bit. I'm not in a position to push those bits anyway,
> so it would be good if my next set of patches didn't have any TTM or radeon
> bits...
>

My tree in drm-rawhide shouldn't have many TTM + modeset commits.

My plan was to take the core modesetting additions patch
75432e26f0a14a30da437018938d4c04a8faa00e
db392921b6e5d051b6651c1f8d47875da789fb44

get a tree with just those in it, copy over the latest version of the modesetting files from the head of the branch and make that the base patch.

Then get the multi-master bits (can go before or after modesetting in theory).

Then drop the Intel driver on top.

Nearly all the work on drm-rawhide has been on radeon mm and kms so the core kms code hasn't seen a major amount of changes.

We should ignore radeon from an upstream perspective for the moment as the TTM interfacing needs to happen first.

Dave.
|
From: Jesse B. <jb...@vi...> - 2008-11-03 20:26:13
|
On Friday, October 31, 2008 2:18 pm Dave Airlie wrote:
> On Sat, Nov 1, 2008 at 4:02 AM, Jesse Barnes <jb...@vi...> wrote:
> > On Friday, October 31, 2008 4:01 am Thomas Hellström wrote:
> >> Jesse Barnes wrote:
> >> > This commit adds the core mode setting routines for use by DRM drivers
> >> > to manage outputs and displays. Originally based on the X.Org Randr
> >> > 1.2 implementation, the code has since been heavily changed by Dave
> >> > Airlie with contributions by Jesse Barnes, Jakob Bornecrantz and
> >> > others.
> >> >
> >> > This one should probably be split up a bit; I think the TTM stuff in
> >> > particular could be factored out fairly easily.
> >>
> >> Jesse,
> >> We must split out TTM from anything that goes into DRM next for now, as
> >> we're about to re-add it in a device dependant
> >> form with a well defined kernel only API. (This is probably going to
> >> happen within a couple of weeks).
> >>
> >> A minimal user-space API will be added when there are drivers supporting
> >> it. I guess the first one will be a reworked via driver following up
> >> with other work.
> >>
> >> So for now, I guess the best thing is to strip the TTM parts completely
> >> and not consider the drivers that rely on it.
> >>
> >> I have a patch lying around that strips TTM from modesetting-gem and
> >> disables the build of radeon, radeon-ms and nouvea, if that would
> >> help...
> >
> > Yeah that would help a bit. I'm not in a position to push those bits
> > anyway, so it would be good if my next set of patches didn't have any TTM
> > or radeon bits...
>
> My tree in drm-rawhide shouldn't have many TTM + modeset commits.
>
> My plan was to take the core modesetting additions patch
> 75432e26f0a14a30da437018938d4c04a8faa00e
> db392921b6e5d051b6651c1f8d47875da789fb44
>
> get a tree with just those in it, copy over the latest version of the
> modesetting files from
> the head of the branch and make that the base patch.
>
> Then get the multi-master bits (can go before or after modesetting in
> theory).
>
> Then drop the Intel driver on top.

Sounds reasonable; I was mainly looking at the radeon stuff to get a feel for how it impacts the core. So do you have a tree with the changes above? If so I can get the intel driver working with it and ignore all the radeon/TTM stuff...

Thanks,
Jesse
|
From: Jesse B. <jb...@vi...> - 2008-11-03 21:22:50
|
On Monday, November 3, 2008 8:26 am Jesse Barnes wrote:
> On Friday, October 31, 2008 2:18 pm Dave Airlie wrote:
> > On Sat, Nov 1, 2008 at 4:02 AM, Jesse Barnes <jb...@vi...>
> > wrote:
> > > On Friday, October 31, 2008 4:01 am Thomas Hellström wrote:
> > >> Jesse Barnes wrote:
> > >> > This commit adds the core mode setting routines for use by DRM
> > >> > drivers to manage outputs and displays. Originally based on the
> > >> > X.Org Randr 1.2 implementation, the code has since been heavily
> > >> > changed by Dave Airlie with contributions by Jesse Barnes, Jakob
> > >> > Bornecrantz and others.
> > >> >
> > >> > This one should probably be split up a bit; I think the TTM stuff in
> > >> > particular could be factored out fairly easily.
> > >>
> > >> Jesse,
> > >> We must split out TTM from anything that goes into DRM next for now,
> > >> as we're about to re-add it in a device dependant
> > >> form with a well defined kernel only API. (This is probably going to
> > >> happen within a couple of weeks).
> > >>
> > >> A minimal user-space API will be added when there are drivers
> > >> supporting it. I guess the first one will be a reworked via driver
> > >> following up with other work.
> > >>
> > >> So for now, I guess the best thing is to strip the TTM parts
> > >> completely and not consider the drivers that rely on it.
> > >>
> > >> I have a patch lying around that strips TTM from modesetting-gem and
> > >> disables the build of radeon, radeon-ms and nouvea, if that would
> > >> help...
> > >
> > > Yeah that would help a bit. I'm not in a position to push those bits
> > > anyway, so it would be good if my next set of patches didn't have any
> > > TTM or radeon bits...
> >
> > My tree in drm-rawhide shouldn't have many TTM + modeset commits.
> >
> > My plan was to take the core modesetting additions patch
> > 75432e26f0a14a30da437018938d4c04a8faa00e
> > db392921b6e5d051b6651c1f8d47875da789fb44
> >
> > get a tree with just those in it, copy over the latest version of the
> > modesetting files from
> > the head of the branch and make that the base patch.
> >
> > Then get the multi-master bits (can go before or after modesetting in
> > theory).
> >
> > Then drop the Intel driver on top.
>
> Sounds reasonable; I was mainly looking at the radeon stuff to get a feel
> for how it impacts the core. So do you have a tree with the changes above?
> If so I can get the intel driver working with it and ignore all the
> radeon/TTM stuff...

Now that I've looked at it a little, it seems like the mods are a bit more interrelated than I'd hoped. It might be easier to just drop the TTM bits from the core mode setting patch I posted, though those are pretty tangled up too.

Jesse
|